2026-04-09 00:00:12.035512 | Job console starting
2026-04-09 00:00:12.056045 | Updating git repos
2026-04-09 00:00:12.149373 | Cloning repos into workspace
2026-04-09 00:00:12.555765 | Restoring repo states
2026-04-09 00:00:12.613979 | Merging changes
2026-04-09 00:00:12.614002 | Checking out repos
2026-04-09 00:00:13.411632 | Preparing playbooks
2026-04-09 00:00:15.512165 | Running Ansible setup
2026-04-09 00:00:23.138211 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-09 00:00:24.153433 |
2026-04-09 00:00:24.153549 | PLAY [Base pre]
2026-04-09 00:00:24.200835 |
2026-04-09 00:00:24.200946 | TASK [Setup log path fact]
2026-04-09 00:00:24.238659 | orchestrator | ok
2026-04-09 00:00:24.268618 |
2026-04-09 00:00:24.268757 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-09 00:00:24.326365 | orchestrator | ok
2026-04-09 00:00:24.336298 |
2026-04-09 00:00:24.336389 | TASK [emit-job-header : Print job information]
2026-04-09 00:00:24.465159 | # Job Information
2026-04-09 00:00:24.465351 | Ansible Version: 2.16.14
2026-04-09 00:00:24.465383 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-09 00:00:24.465416 | Pipeline: periodic-midnight
2026-04-09 00:00:24.465440 | Executor: 521e9411259a
2026-04-09 00:00:24.465456 | Triggered by: https://github.com/osism/testbed
2026-04-09 00:00:24.465474 | Event ID: 229a3ccad3314f149ff7c6cbe4e5e7b7
2026-04-09 00:00:24.474620 |
2026-04-09 00:00:24.474715 | LOOP [emit-job-header : Print node information]
2026-04-09 00:00:24.746709 | orchestrator | ok:
2026-04-09 00:00:24.746929 | orchestrator | # Node Information
2026-04-09 00:00:24.746960 | orchestrator | Inventory Hostname: orchestrator
2026-04-09 00:00:24.746980 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-09 00:00:24.746998 | orchestrator | Username: zuul-testbed02
2026-04-09 00:00:24.747015 | orchestrator | Distro: Debian 12.13
2026-04-09 00:00:24.747034 | orchestrator | Provider: static-testbed
2026-04-09 00:00:24.747051 | orchestrator | Region:
2026-04-09 00:00:24.747068 | orchestrator | Label: testbed-orchestrator
2026-04-09 00:00:24.747084 | orchestrator | Product Name: OpenStack Nova
2026-04-09 00:00:24.747100 | orchestrator | Interface IP: 81.163.193.140
2026-04-09 00:00:24.758982 |
2026-04-09 00:00:24.759071 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-09 00:00:25.793349 | orchestrator -> localhost | changed
2026-04-09 00:00:25.800035 |
2026-04-09 00:00:25.800122 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-09 00:00:29.060451 | orchestrator -> localhost | changed
2026-04-09 00:00:29.076306 |
2026-04-09 00:00:29.076411 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-09 00:00:30.259171 | orchestrator -> localhost | ok
2026-04-09 00:00:30.268085 |
2026-04-09 00:00:30.268186 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-09 00:00:30.316221 | orchestrator | ok
2026-04-09 00:00:30.360907 | orchestrator | included: /var/lib/zuul/builds/37b66db7046a43208c813cef6fe11a97/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-09 00:00:30.430140 |
2026-04-09 00:00:30.430242 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-09 00:00:33.446289 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-09 00:00:33.446449 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/37b66db7046a43208c813cef6fe11a97/work/37b66db7046a43208c813cef6fe11a97_id_rsa
2026-04-09 00:00:33.446480 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/37b66db7046a43208c813cef6fe11a97/work/37b66db7046a43208c813cef6fe11a97_id_rsa.pub
2026-04-09 00:00:33.446502 | orchestrator -> localhost | The key fingerprint is:
2026-04-09 00:00:33.446522 | orchestrator -> localhost | SHA256:aIPFFNalvd9/FiFzNprBmxmO2Y5wta2unbKfx98ALSA zuul-build-sshkey
2026-04-09 00:00:33.446541 | orchestrator -> localhost | The key's randomart image is:
2026-04-09 00:00:33.446568 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-09 00:00:33.446586 | orchestrator -> localhost | | +o .. |
2026-04-09 00:00:33.446604 | orchestrator -> localhost | | + .o |
2026-04-09 00:00:33.446620 | orchestrator -> localhost | | o E o . |
2026-04-09 00:00:33.446636 | orchestrator -> localhost | | o . . o O = |
2026-04-09 00:00:33.446652 | orchestrator -> localhost | | . + S . O ^ o|
2026-04-09 00:00:33.446673 | orchestrator -> localhost | | . . . = & o |
2026-04-09 00:00:33.446690 | orchestrator -> localhost | | o + = .|
2026-04-09 00:00:33.446707 | orchestrator -> localhost | | o.oo=+|
2026-04-09 00:00:33.446723 | orchestrator -> localhost | | o**..*|
2026-04-09 00:00:33.446761 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-09 00:00:33.446807 | orchestrator -> localhost | ok: Runtime: 0:00:01.770673
2026-04-09 00:00:33.452724 |
2026-04-09 00:00:33.452812 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-09 00:00:33.499900 | orchestrator | ok
2026-04-09 00:00:33.507978 | orchestrator | included: /var/lib/zuul/builds/37b66db7046a43208c813cef6fe11a97/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-09 00:00:33.528043 |
2026-04-09 00:00:33.528136 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-09 00:00:33.642716 | orchestrator | skipping: Conditional result was False
2026-04-09 00:00:33.650950 |
2026-04-09 00:00:33.651044 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-09 00:00:34.742160 | orchestrator | changed
2026-04-09 00:00:34.747313 |
2026-04-09 00:00:34.753045 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-09 00:00:35.073867 | orchestrator | ok
2026-04-09 00:00:35.081706 |
2026-04-09 00:00:35.081809 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-09 00:00:35.591831 | orchestrator | ok
2026-04-09 00:00:35.601497 |
2026-04-09 00:00:35.601655 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-09 00:00:36.087115 | orchestrator | ok
2026-04-09 00:00:36.092718 |
2026-04-09 00:00:36.092816 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-09 00:00:36.127158 | orchestrator | skipping: Conditional result was False
2026-04-09 00:00:36.139517 |
2026-04-09 00:00:36.139626 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-09 00:00:37.427243 | orchestrator -> localhost | changed
2026-04-09 00:00:37.441052 |
2026-04-09 00:00:37.441154 | TASK [add-build-sshkey : Add back temp key]
2026-04-09 00:00:38.115177 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/37b66db7046a43208c813cef6fe11a97/work/37b66db7046a43208c813cef6fe11a97_id_rsa (zuul-build-sshkey)
2026-04-09 00:00:38.115355 | orchestrator -> localhost | ok: Runtime: 0:00:00.014709
2026-04-09 00:00:38.121149 |
2026-04-09 00:00:38.121235 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-09 00:00:38.721168 | orchestrator | ok
2026-04-09 00:00:38.725890 |
2026-04-09 00:00:38.725970 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-09 00:00:38.753325 | orchestrator | skipping: Conditional result was False
2026-04-09 00:00:38.893905 |
2026-04-09 00:00:38.894006 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-09 00:00:39.643373 | orchestrator | ok
2026-04-09 00:00:39.667413 |
2026-04-09 00:00:39.667516 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-09 00:00:39.740471 | orchestrator | ok
2026-04-09 00:00:39.746495 |
2026-04-09 00:00:39.746579 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-09 00:00:40.323202 | orchestrator -> localhost | ok
2026-04-09 00:00:40.335571 |
2026-04-09 00:00:40.335658 | TASK [validate-host : Collect information about the host]
2026-04-09 00:00:42.233255 | orchestrator | ok
2026-04-09 00:00:42.267078 |
2026-04-09 00:00:42.267204 | TASK [validate-host : Sanitize hostname]
2026-04-09 00:00:42.389279 | orchestrator | ok
2026-04-09 00:00:42.393668 |
2026-04-09 00:00:42.393757 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-09 00:00:43.859574 | orchestrator -> localhost | changed
2026-04-09 00:00:43.864651 |
2026-04-09 00:00:43.864740 | TASK [validate-host : Collect information about zuul worker]
2026-04-09 00:00:44.512333 | orchestrator | ok
2026-04-09 00:00:44.516819 |
2026-04-09 00:00:44.516905 | TASK [validate-host : Write out all zuul information for each host]
2026-04-09 00:00:45.943024 | orchestrator -> localhost | changed
2026-04-09 00:00:45.951440 |
2026-04-09 00:00:45.951523 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-09 00:00:46.272685 | orchestrator | ok
2026-04-09 00:00:46.277501 |
2026-04-09 00:00:46.277585 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-09 00:02:18.681614 | orchestrator | changed:
2026-04-09 00:02:18.683014 | orchestrator | .d..t...... src/
2026-04-09 00:02:18.683068 | orchestrator | .d..t...... src/github.com/
2026-04-09 00:02:18.683093 | orchestrator | .d..t...... src/github.com/osism/
2026-04-09 00:02:18.683115 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-09 00:02:18.683135 | orchestrator | RedHat.yml
2026-04-09 00:02:18.697832 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-09 00:02:18.697894 | orchestrator | RedHat.yml
2026-04-09 00:02:18.697955 | orchestrator | = 1.53.0"...
2026-04-09 00:02:30.208544 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-09 00:02:30.341988 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-09 00:02:31.119927 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-09 00:02:31.180854 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-09 00:02:31.858964 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-09 00:02:31.919874 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-09 00:02:32.504123 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-09 00:02:32.504206 | orchestrator |
2026-04-09 00:02:32.504215 | orchestrator | Providers are signed by their developers.
2026-04-09 00:02:32.504221 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-09 00:02:32.504225 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-09 00:02:32.504233 | orchestrator |
2026-04-09 00:02:32.504237 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-09 00:02:32.504242 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-09 00:02:32.504253 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-09 00:02:32.504257 | orchestrator | you run "tofu init" in the future.
2026-04-09 00:02:32.504476 | orchestrator |
2026-04-09 00:02:32.504496 | orchestrator | OpenTofu has been successfully initialized!
2026-04-09 00:02:32.504526 | orchestrator |
2026-04-09 00:02:32.504531 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-09 00:02:32.504535 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-09 00:02:32.504539 | orchestrator | should now work.
2026-04-09 00:02:32.504544 | orchestrator |
2026-04-09 00:02:32.504547 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-09 00:02:32.504551 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-09 00:02:32.504556 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-09 00:02:32.671708 | orchestrator | Created and switched to workspace "ci"!
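(Editor's note: the provider installs above are consistent with a `required_providers` block roughly like the following. This is a hedged sketch reconstructed from the versions and constraint fragments visible in the log, not the testbed's actual configuration; the `null` constraint in particular is an assumption.)

```hcl
terraform {
  required_providers {
    # Constraint not visible in the log; v3.2.4 was installed (assumption).
    null = {
      source  = "hashicorp/null"
      version = ">= 3.2.0"
    }
    # ">= 2.2.0" is the constraint shown in the log; v2.8.0 was installed.
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # Only the fragment `= 1.53.0"...` survives in the log; v3.4.0 was installed.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
  }
}
```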
2026-04-09 00:02:32.671760 | orchestrator |
2026-04-09 00:02:32.671766 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-09 00:02:32.671772 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-09 00:02:32.671793 | orchestrator | for this configuration.
2026-04-09 00:02:33.113876 | orchestrator | ci.auto.tfvars
2026-04-09 00:02:33.567640 | orchestrator | default_custom.tf
2026-04-09 00:02:34.639868 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-09 00:02:35.217576 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-09 00:02:35.566495 | orchestrator |
2026-04-09 00:02:35.566652 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-09 00:02:35.566674 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-09 00:02:35.566686 | orchestrator | + create
2026-04-09 00:02:35.566698 | orchestrator | <= read (data resources)
2026-04-09 00:02:35.566711 | orchestrator |
2026-04-09 00:02:35.566722 | orchestrator | OpenTofu will perform the following actions:
2026-04-09 00:02:35.566748 | orchestrator |
2026-04-09 00:02:35.566760 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-09 00:02:35.566771 | orchestrator | # (config refers to values not yet known)
2026-04-09 00:02:35.566782 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-09 00:02:35.566794 | orchestrator | + checksum = (known after apply)
2026-04-09 00:02:35.566805 | orchestrator | + created_at = (known after apply)
2026-04-09 00:02:35.566816 | orchestrator | + file = (known after apply)
2026-04-09 00:02:35.566827 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.566868 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.566878 | orchestrator | + min_disk_gb = (known after apply)
2026-04-09 00:02:35.566888 | orchestrator | + min_ram_mb = (known after apply)
2026-04-09 00:02:35.566898 | orchestrator | + most_recent = true
2026-04-09 00:02:35.566909 | orchestrator | + name = (known after apply)
2026-04-09 00:02:35.566919 | orchestrator | + protected = (known after apply)
2026-04-09 00:02:35.566929 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.566943 | orchestrator | + schema = (known after apply)
2026-04-09 00:02:35.566952 | orchestrator | + size_bytes = (known after apply)
2026-04-09 00:02:35.566962 | orchestrator | + tags = (known after apply)
2026-04-09 00:02:35.566971 | orchestrator | + updated_at = (known after apply)
2026-04-09 00:02:35.566981 | orchestrator | }
2026-04-09 00:02:35.566991 | orchestrator |
2026-04-09 00:02:35.567000 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-09 00:02:35.567011 | orchestrator | # (config refers to values not yet known)
2026-04-09 00:02:35.567020 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-09 00:02:35.567030 | orchestrator | + checksum = (known after apply)
2026-04-09 00:02:35.567039 | orchestrator | + created_at = (known after apply)
2026-04-09 00:02:35.567049 | orchestrator | + file = (known after apply)
2026-04-09 00:02:35.567059 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.567069 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.567078 | orchestrator | + min_disk_gb = (known after apply)
2026-04-09 00:02:35.567087 | orchestrator | + min_ram_mb = (known after apply)
2026-04-09 00:02:35.567097 | orchestrator | + most_recent = true
2026-04-09 00:02:35.567107 | orchestrator | + name = (known after apply)
2026-04-09 00:02:35.567117 | orchestrator | + protected = (known after apply)
2026-04-09 00:02:35.567126 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.567136 | orchestrator | + schema = (known after apply)
2026-04-09 00:02:35.567145 | orchestrator | + size_bytes = (known after apply)
2026-04-09 00:02:35.567155 | orchestrator | + tags = (known after apply)
2026-04-09 00:02:35.567164 | orchestrator | + updated_at = (known after apply)
2026-04-09 00:02:35.567174 | orchestrator | }
2026-04-09 00:02:35.567184 | orchestrator |
2026-04-09 00:02:35.567193 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-09 00:02:35.567211 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-09 00:02:35.567227 | orchestrator | + content = (known after apply)
2026-04-09 00:02:35.567245 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-09 00:02:35.567260 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-09 00:02:35.567276 | orchestrator | + content_md5 = (known after apply)
2026-04-09 00:02:35.567291 | orchestrator | + content_sha1 = (known after apply)
2026-04-09 00:02:35.567307 | orchestrator | + content_sha256 = (known after apply)
2026-04-09 00:02:35.567321 | orchestrator | + content_sha512 = (known after apply)
2026-04-09 00:02:35.567335 | orchestrator | + directory_permission = "0777"
2026-04-09 00:02:35.567351 | orchestrator | + file_permission = "0644"
2026-04-09 00:02:35.567366 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-09 00:02:35.567380 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.567396 | orchestrator | }
2026-04-09 00:02:35.567411 | orchestrator |
2026-04-09 00:02:35.567426 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-09 00:02:35.567441 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-09 00:02:35.567457 | orchestrator | + content = (known after apply)
2026-04-09 00:02:35.567473 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-09 00:02:35.567489 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-09 00:02:35.567506 | orchestrator | + content_md5 = (known after apply)
2026-04-09 00:02:35.567541 | orchestrator | + content_sha1 = (known after apply)
2026-04-09 00:02:35.567558 | orchestrator | + content_sha256 = (known after apply)
2026-04-09 00:02:35.567573 | orchestrator | + content_sha512 = (known after apply)
2026-04-09 00:02:35.567589 | orchestrator | + directory_permission = "0777"
2026-04-09 00:02:35.567605 | orchestrator | + file_permission = "0644"
2026-04-09 00:02:35.567637 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-09 00:02:35.567656 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.567672 | orchestrator | }
2026-04-09 00:02:35.567709 | orchestrator |
2026-04-09 00:02:35.567736 | orchestrator | # local_file.inventory will be created
2026-04-09 00:02:35.567747 | orchestrator | + resource "local_file" "inventory" {
2026-04-09 00:02:35.567757 | orchestrator | + content = (known after apply)
2026-04-09 00:02:35.567766 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-09 00:02:35.567776 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-09 00:02:35.567785 | orchestrator | + content_md5 = (known after apply)
2026-04-09 00:02:35.567795 | orchestrator | + content_sha1 = (known after apply)
2026-04-09 00:02:35.567806 | orchestrator | + content_sha256 = (known after apply)
2026-04-09 00:02:35.567815 | orchestrator | + content_sha512 = (known after apply)
2026-04-09 00:02:35.567825 | orchestrator | + directory_permission = "0777"
2026-04-09 00:02:35.567834 | orchestrator | + file_permission = "0644"
2026-04-09 00:02:35.567844 | orchestrator | + filename = "inventory.ci"
2026-04-09 00:02:35.567853 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.567863 | orchestrator | }
2026-04-09 00:02:35.567873 | orchestrator |
2026-04-09 00:02:35.567882 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-09 00:02:35.567892 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-09 00:02:35.567907 | orchestrator | + content = (sensitive value)
2026-04-09 00:02:35.567923 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-09 00:02:35.567939 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-09 00:02:35.567955 | orchestrator | + content_md5 = (known after apply)
2026-04-09 00:02:35.567970 | orchestrator | + content_sha1 = (known after apply)
2026-04-09 00:02:35.567986 | orchestrator | + content_sha256 = (known after apply)
2026-04-09 00:02:35.568003 | orchestrator | + content_sha512 = (known after apply)
2026-04-09 00:02:35.568019 | orchestrator | + directory_permission = "0700"
2026-04-09 00:02:35.568034 | orchestrator | + file_permission = "0600"
2026-04-09 00:02:35.568044 | orchestrator | + filename = ".id_rsa.ci"
2026-04-09 00:02:35.568054 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.568064 | orchestrator | }
2026-04-09 00:02:35.568073 | orchestrator |
2026-04-09 00:02:35.568083 | orchestrator | # null_resource.node_semaphore will be created
2026-04-09 00:02:35.568092 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-09 00:02:35.568102 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.568112 | orchestrator | }
2026-04-09 00:02:35.568121 | orchestrator |
2026-04-09 00:02:35.568131 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-09 00:02:35.568141 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-09 00:02:35.568151 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.568160 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.568170 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.568179 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.568189 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.568198 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-09 00:02:35.568208 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.568217 | orchestrator | + size = 80
2026-04-09 00:02:35.568227 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.568236 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.568245 | orchestrator | }
2026-04-09 00:02:35.568255 | orchestrator |
2026-04-09 00:02:35.568265 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-09 00:02:35.568275 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:35.568284 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.568294 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.568304 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.568322 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.568332 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.568342 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-09 00:02:35.568351 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.568361 | orchestrator | + size = 80
2026-04-09 00:02:35.568370 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.568380 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.568389 | orchestrator | }
2026-04-09 00:02:35.568399 | orchestrator |
2026-04-09 00:02:35.568409 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-09 00:02:35.568418 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:35.568428 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.568437 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.568447 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.568456 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.568466 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.568475 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-09 00:02:35.568485 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.568494 | orchestrator | + size = 80
2026-04-09 00:02:35.568504 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.568514 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.568553 | orchestrator | }
2026-04-09 00:02:35.568563 | orchestrator |
2026-04-09 00:02:35.568573 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-09 00:02:35.568582 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:35.568592 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.568602 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.568611 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.568621 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.568631 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.568648 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-09 00:02:35.568664 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.568680 | orchestrator | + size = 80
2026-04-09 00:02:35.568696 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.568712 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.568728 | orchestrator | }
2026-04-09 00:02:35.568743 | orchestrator |
2026-04-09 00:02:35.568758 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-09 00:02:35.568773 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:35.568790 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.568807 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.568833 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.568848 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.568864 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.568889 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-09 00:02:35.568907 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.568922 | orchestrator | + size = 80
2026-04-09 00:02:35.568932 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.568941 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.568950 | orchestrator | }
2026-04-09 00:02:35.568960 | orchestrator |
2026-04-09 00:02:35.568970 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-09 00:02:35.568979 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:35.568989 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.568999 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.569008 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.569036 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.569046 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.569055 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-09 00:02:35.569065 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.569074 | orchestrator | + size = 80
2026-04-09 00:02:35.569084 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.569093 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.569103 | orchestrator | }
2026-04-09 00:02:35.569112 | orchestrator |
2026-04-09 00:02:35.569122 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-09 00:02:35.569131 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-09 00:02:35.569141 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.569150 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.569159 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.569169 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.569178 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.569188 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-09 00:02:35.569198 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.569207 | orchestrator | + size = 80
2026-04-09 00:02:35.569216 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.569226 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.569235 | orchestrator | }
2026-04-09 00:02:35.569244 | orchestrator |
2026-04-09 00:02:35.569254 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-09 00:02:35.569264 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:35.569274 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.569283 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.569293 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.569302 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.569312 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-09 00:02:35.569321 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.569331 | orchestrator | + size = 20
2026-04-09 00:02:35.569340 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.569350 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.569359 | orchestrator | }
2026-04-09 00:02:35.569369 | orchestrator |
2026-04-09 00:02:35.569378 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-09 00:02:35.569388 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:35.569397 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.569407 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.569416 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.569426 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.569435 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-09 00:02:35.569445 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.569454 | orchestrator | + size = 20
2026-04-09 00:02:35.569464 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.569473 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.569482 | orchestrator | }
2026-04-09 00:02:35.569492 | orchestrator |
2026-04-09 00:02:35.569502 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-09 00:02:35.569512 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:35.569553 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.569563 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.569573 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.569582 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.569592 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-09 00:02:35.569602 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.569618 | orchestrator | + size = 20
2026-04-09 00:02:35.569628 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.569637 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.569647 | orchestrator | }
2026-04-09 00:02:35.569657 | orchestrator |
2026-04-09 00:02:35.569666 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-09 00:02:35.569676 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:35.569685 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.569695 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.569705 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.569714 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.569724 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-09 00:02:35.569733 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.569743 | orchestrator | + size = 20
2026-04-09 00:02:35.569752 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.569762 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.569772 | orchestrator | }
2026-04-09 00:02:35.569781 | orchestrator |
2026-04-09 00:02:35.569791 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-09 00:02:35.569801 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:35.569810 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.569820 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.569829 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.569839 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.569855 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-09 00:02:35.569865 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.569879 | orchestrator | + size = 20
2026-04-09 00:02:35.569889 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.569899 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.569909 | orchestrator | }
2026-04-09 00:02:35.569918 | orchestrator |
2026-04-09 00:02:35.569928 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-09 00:02:35.569937 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:35.569947 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.569957 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.569966 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.569976 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.569985 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-09 00:02:35.569995 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.570004 | orchestrator | + size = 20
2026-04-09 00:02:35.570051 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.570064 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.570073 | orchestrator | }
2026-04-09 00:02:35.570083 | orchestrator |
2026-04-09 00:02:35.570093 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-09 00:02:35.570102 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:35.570112 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.570122 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.570131 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.570141 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.570150 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-09 00:02:35.570160 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.570169 | orchestrator | + size = 20
2026-04-09 00:02:35.570179 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.570188 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.570198 | orchestrator | }
2026-04-09 00:02:35.570208 | orchestrator |
2026-04-09 00:02:35.570217 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-09 00:02:35.570227 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:35.570243 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.570253 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.570262 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.570272 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.570282 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-09 00:02:35.570291 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.570301 | orchestrator | + size = 20
2026-04-09 00:02:35.570310 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.570320 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.570329 | orchestrator | }
2026-04-09 00:02:35.570339 | orchestrator |
2026-04-09 00:02:35.570349 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created
2026-04-09 00:02:35.570359 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-09 00:02:35.570368 | orchestrator | + attachment = (known after apply)
2026-04-09 00:02:35.570378 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.570387 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.570397 | orchestrator | + metadata = (known after apply)
2026-04-09 00:02:35.570407 | orchestrator | + name = "testbed-volume-8-node-5"
2026-04-09 00:02:35.570416 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.570426 | orchestrator | + size = 20
2026-04-09 00:02:35.570435 | orchestrator | + volume_retype_policy = "never"
2026-04-09 00:02:35.570445 | orchestrator | + volume_type = "ssd"
2026-04-09 00:02:35.570454 | orchestrator | }
2026-04-09 00:02:35.570464 | orchestrator |
2026-04-09 00:02:35.570474 | orchestrator | # openstack_compute_instance_v2.manager_server will be created
2026-04-09 00:02:35.570483 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" {
2026-04-09 00:02:35.570493 | orchestrator | + access_ip_v4 = (known after apply)
2026-04-09 00:02:35.570503 | orchestrator | + access_ip_v6 = (known after apply)
2026-04-09 00:02:35.570512 | orchestrator | + all_metadata = (known after apply)
2026-04-09 00:02:35.570542 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:35.570552 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.570562 | orchestrator | + config_drive = true
2026-04-09 00:02:35.570571 | orchestrator | + created = (known after apply)
2026-04-09 00:02:35.570581 | orchestrator | + flavor_id = (known after apply)
2026-04-09 00:02:35.570590 | orchestrator | + flavor_name = "OSISM-4V-16"
2026-04-09 00:02:35.570600 | orchestrator | + force_delete = false
2026-04-09 00:02:35.570609 | orchestrator | + hypervisor_hostname = (known after apply)
2026-04-09 00:02:35.570619 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.570629 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.570638 | orchestrator | + image_name = (known after apply)
2026-04-09 00:02:35.570648 | orchestrator | + key_pair = "testbed"
2026-04-09 00:02:35.570657 | orchestrator | + name = "testbed-manager"
2026-04-09 00:02:35.570666 | orchestrator | + power_state = "active"
2026-04-09 00:02:35.570676 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.570686 | orchestrator | + security_groups = (known after apply)
2026-04-09 00:02:35.570695 | orchestrator | + stop_before_destroy = false
2026-04-09 00:02:35.570705 | orchestrator | + updated = (known after apply)
2026-04-09 00:02:35.570714 | orchestrator | + user_data = (sensitive value)
2026-04-09 00:02:35.570724 | orchestrator |
2026-04-09 00:02:35.570733 | orchestrator | + block_device {
2026-04-09 00:02:35.570743 | orchestrator | + boot_index = 0
2026-04-09 00:02:35.570753 | orchestrator | + delete_on_termination = false
2026-04-09 00:02:35.570767 | orchestrator | + destination_type = "volume"
2026-04-09 00:02:35.570777 | orchestrator | + multiattach = false
2026-04-09 00:02:35.570786 | orchestrator | + source_type = "volume"
2026-04-09 00:02:35.570796 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.570811 | orchestrator | }
2026-04-09 00:02:35.570821 | orchestrator |
2026-04-09 00:02:35.570830 | orchestrator | + network {
2026-04-09 00:02:35.570840 | orchestrator | + access_network = false
2026-04-09 00:02:35.570850 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-04-09 00:02:35.570859 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-04-09 00:02:35.570869 | orchestrator | + mac = (known after apply)
2026-04-09 00:02:35.570885 | orchestrator | + name = (known after apply)
2026-04-09 00:02:35.570895 | orchestrator | + port = (known after apply)
2026-04-09 00:02:35.570904 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.570914 | orchestrator | }
2026-04-09 00:02:35.570923 | orchestrator | }
2026-04-09 00:02:35.570933 | orchestrator |
2026-04-09 00:02:35.570943 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created
2026-04-09 00:02:35.570952 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-04-09 00:02:35.570962 | orchestrator | + access_ip_v4 = (known after apply)
2026-04-09 00:02:35.570972 | orchestrator | + access_ip_v6 = (known after apply)
2026-04-09 00:02:35.570981 | orchestrator | + all_metadata = (known after apply)
2026-04-09 00:02:35.570991 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:35.571000 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.571010 | orchestrator | + config_drive = true
2026-04-09 00:02:35.571026 | orchestrator | + created = (known after apply)
2026-04-09 00:02:35.571621 | orchestrator | + flavor_id = (known after apply)
2026-04-09 00:02:35.571684 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-04-09 00:02:35.571691 | orchestrator | + force_delete = false
2026-04-09 00:02:35.571696 | orchestrator | + hypervisor_hostname = (known after apply)
2026-04-09 00:02:35.571700 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.571704 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.571708 | orchestrator | + image_name = (known after apply)
2026-04-09 00:02:35.571712 | orchestrator | + key_pair = "testbed"
2026-04-09 00:02:35.571716 | orchestrator | + name = "testbed-node-0"
2026-04-09 00:02:35.571720 | orchestrator | + power_state = "active"
2026-04-09 00:02:35.571723 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.571727 | orchestrator | + security_groups = (known after apply)
2026-04-09 00:02:35.571731 | orchestrator | + stop_before_destroy = false
2026-04-09 00:02:35.571735 | orchestrator | + updated = (known after apply)
2026-04-09 00:02:35.571739 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-04-09 00:02:35.571743 | orchestrator |
2026-04-09 00:02:35.571747 | orchestrator | + block_device {
2026-04-09 00:02:35.571751 | orchestrator | + boot_index = 0
2026-04-09 00:02:35.571755 | orchestrator | + delete_on_termination = false
2026-04-09 00:02:35.571759 | orchestrator | + destination_type = "volume"
2026-04-09 00:02:35.571763 | orchestrator | + multiattach = false
2026-04-09 00:02:35.571766 | orchestrator | + source_type = "volume"
2026-04-09 00:02:35.571770 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.571775 | orchestrator | }
2026-04-09 00:02:35.571779 | orchestrator |
2026-04-09 00:02:35.571782 | orchestrator | + network {
2026-04-09 00:02:35.571786 | orchestrator | + access_network = false
2026-04-09 00:02:35.571790 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-04-09 00:02:35.571796 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-04-09 00:02:35.571800 | orchestrator | + mac = (known after apply)
2026-04-09 00:02:35.571803 | orchestrator | + name = (known after apply)
2026-04-09 00:02:35.571807 | orchestrator | + port = (known after apply)
2026-04-09 00:02:35.571811 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.571814 | orchestrator | }
2026-04-09 00:02:35.571818 | orchestrator | }
2026-04-09 00:02:35.571822 | orchestrator |
2026-04-09 00:02:35.571826 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created
2026-04-09 00:02:35.571830 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-04-09 00:02:35.571833 | orchestrator | + access_ip_v4 = (known after apply)
2026-04-09 00:02:35.571849 | orchestrator | + access_ip_v6 = (known after apply)
2026-04-09 00:02:35.571853 | orchestrator | + all_metadata = (known after apply)
2026-04-09 00:02:35.571857 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:35.571860 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.571864 | orchestrator | + config_drive = true
2026-04-09 00:02:35.571868 | orchestrator | + created = (known after apply)
2026-04-09 00:02:35.571872 | orchestrator | + flavor_id = (known after apply)
2026-04-09 00:02:35.571875 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-04-09 00:02:35.571879 | orchestrator | + force_delete = false
2026-04-09 00:02:35.571883 | orchestrator | + hypervisor_hostname = (known after apply)
2026-04-09 00:02:35.571887 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.571890 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.571894 | orchestrator | + image_name = (known after apply)
2026-04-09 00:02:35.571898 | orchestrator | + key_pair = "testbed"
2026-04-09 00:02:35.571901 | orchestrator | + name = "testbed-node-1"
2026-04-09 00:02:35.571905 | orchestrator | + power_state = "active"
2026-04-09 00:02:35.571909 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.571913 | orchestrator | + security_groups = (known after apply)
2026-04-09 00:02:35.571917 | orchestrator | + stop_before_destroy = false
2026-04-09 00:02:35.571920 | orchestrator | + updated = (known after apply)
2026-04-09 00:02:35.571924 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-04-09 00:02:35.571928 | orchestrator |
2026-04-09 00:02:35.571932 | orchestrator | + block_device {
2026-04-09 00:02:35.571936 | orchestrator | + boot_index = 0
2026-04-09 00:02:35.571939 | orchestrator | + delete_on_termination = false
2026-04-09 00:02:35.571943 | orchestrator | + destination_type = "volume"
2026-04-09 00:02:35.571947 | orchestrator | + multiattach = false
2026-04-09 00:02:35.571951 | orchestrator | + source_type = "volume"
2026-04-09 00:02:35.571954 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.571958 | orchestrator | }
2026-04-09 00:02:35.571962 | orchestrator |
2026-04-09 00:02:35.571966 | orchestrator | + network {
2026-04-09 00:02:35.571969 | orchestrator | + access_network = false
2026-04-09 00:02:35.571973 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-04-09 00:02:35.571977 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-04-09 00:02:35.571981 | orchestrator | + mac = (known after apply)
2026-04-09 00:02:35.571984 | orchestrator | + name = (known after apply)
2026-04-09 00:02:35.571988 | orchestrator | + port = (known after apply)
2026-04-09 00:02:35.571992 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.571996 | orchestrator | }
2026-04-09 00:02:35.572000 | orchestrator | }
2026-04-09 00:02:35.572003 | orchestrator |
2026-04-09 00:02:35.572007 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created
2026-04-09 00:02:35.572011 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-04-09 00:02:35.572015 | orchestrator | + access_ip_v4 = (known after apply)
2026-04-09 00:02:35.572019 | orchestrator | + access_ip_v6 = (known after apply)
2026-04-09 00:02:35.572025 | orchestrator | + all_metadata = (known after apply)
2026-04-09 00:02:35.572042 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:35.572051 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.572055 | orchestrator | + config_drive = true
2026-04-09 00:02:35.572059 | orchestrator | + created = (known after apply)
2026-04-09 00:02:35.572063 | orchestrator | + flavor_id = (known after apply)
2026-04-09 00:02:35.572066 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-04-09 00:02:35.572070 | orchestrator | + force_delete = false
2026-04-09 00:02:35.572074 | orchestrator | + hypervisor_hostname = (known after apply)
2026-04-09 00:02:35.572078 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.572081 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.572089 | orchestrator | + image_name = (known after apply)
2026-04-09 00:02:35.572092 | orchestrator | + key_pair = "testbed"
2026-04-09 00:02:35.572096 | orchestrator | + name = "testbed-node-2"
2026-04-09 00:02:35.572100 | orchestrator | + power_state = "active"
2026-04-09 00:02:35.572104 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.572108 | orchestrator | + security_groups = (known after apply)
2026-04-09 00:02:35.572111 | orchestrator | + stop_before_destroy = false
2026-04-09 00:02:35.572115 | orchestrator | + updated = (known after apply)
2026-04-09 00:02:35.572119 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-04-09 00:02:35.572123 | orchestrator |
2026-04-09 00:02:35.572127 | orchestrator | + block_device {
2026-04-09 00:02:35.572130 | orchestrator | + boot_index = 0
2026-04-09 00:02:35.572134 | orchestrator | + delete_on_termination = false
2026-04-09 00:02:35.572138 | orchestrator | + destination_type = "volume"
2026-04-09 00:02:35.572142 | orchestrator | + multiattach = false
2026-04-09 00:02:35.572146 | orchestrator | + source_type = "volume"
2026-04-09 00:02:35.572149 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.572153 | orchestrator | }
2026-04-09 00:02:35.572157 | orchestrator |
2026-04-09 00:02:35.572161 | orchestrator | + network {
2026-04-09 00:02:35.572164 | orchestrator | + access_network = false
2026-04-09 00:02:35.572168 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-04-09 00:02:35.572172 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-04-09 00:02:35.572176 | orchestrator | + mac = (known after apply)
2026-04-09 00:02:35.572180 | orchestrator | + name = (known after apply)
2026-04-09 00:02:35.572183 | orchestrator | + port = (known after apply)
2026-04-09 00:02:35.572187 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.572191 | orchestrator | }
2026-04-09 00:02:35.572195 | orchestrator | }
2026-04-09 00:02:35.572199 | orchestrator |
2026-04-09 00:02:35.572203 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created
2026-04-09 00:02:35.572206 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-04-09 00:02:35.572210 | orchestrator | + access_ip_v4 = (known after apply)
2026-04-09 00:02:35.572214 | orchestrator | + access_ip_v6 = (known after apply)
2026-04-09 00:02:35.572218 | orchestrator | + all_metadata = (known after apply)
2026-04-09 00:02:35.572222 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:35.572226 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.572229 | orchestrator | + config_drive = true
2026-04-09 00:02:35.572233 | orchestrator | + created = (known after apply)
2026-04-09 00:02:35.572237 | orchestrator | + flavor_id = (known after apply)
2026-04-09 00:02:35.572241 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-04-09 00:02:35.572244 | orchestrator | + force_delete = false
2026-04-09 00:02:35.572248 | orchestrator | + hypervisor_hostname = (known after apply)
2026-04-09 00:02:35.572252 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.572256 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.572260 | orchestrator | + image_name = (known after apply)
2026-04-09 00:02:35.572264 | orchestrator | + key_pair = "testbed"
2026-04-09 00:02:35.572267 | orchestrator | + name = "testbed-node-3"
2026-04-09 00:02:35.572271 | orchestrator | + power_state = "active"
2026-04-09 00:02:35.572275 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.572278 | orchestrator | + security_groups = (known after apply)
2026-04-09 00:02:35.572282 | orchestrator | + stop_before_destroy = false
2026-04-09 00:02:35.572286 | orchestrator | + updated = (known after apply)
2026-04-09 00:02:35.572290 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-04-09 00:02:35.572294 | orchestrator |
2026-04-09 00:02:35.572298 | orchestrator | + block_device {
2026-04-09 00:02:35.572304 | orchestrator | + boot_index = 0
2026-04-09 00:02:35.572308 | orchestrator | + delete_on_termination = false
2026-04-09 00:02:35.572312 | orchestrator | + destination_type = "volume"
2026-04-09 00:02:35.572319 | orchestrator | + multiattach = false
2026-04-09 00:02:35.572323 | orchestrator | + source_type = "volume"
2026-04-09 00:02:35.572327 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.572331 | orchestrator | }
2026-04-09 00:02:35.572335 | orchestrator |
2026-04-09 00:02:35.572338 | orchestrator | + network {
2026-04-09 00:02:35.572342 | orchestrator | + access_network = false
2026-04-09 00:02:35.572346 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-04-09 00:02:35.572350 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-04-09 00:02:35.572354 | orchestrator | + mac = (known after apply)
2026-04-09 00:02:35.572357 | orchestrator | + name = (known after apply)
2026-04-09 00:02:35.572361 | orchestrator | + port = (known after apply)
2026-04-09 00:02:35.572365 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.572369 | orchestrator | }
2026-04-09 00:02:35.572372 | orchestrator | }
2026-04-09 00:02:35.572376 | orchestrator |
2026-04-09 00:02:35.572380 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created
2026-04-09 00:02:35.572384 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-04-09 00:02:35.572388 | orchestrator | + access_ip_v4 = (known after apply)
2026-04-09 00:02:35.572391 | orchestrator | + access_ip_v6 = (known after apply)
2026-04-09 00:02:35.572395 | orchestrator | + all_metadata = (known after apply)
2026-04-09 00:02:35.572399 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:35.572403 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.572407 | orchestrator | + config_drive = true
2026-04-09 00:02:35.572410 | orchestrator | + created = (known after apply)
2026-04-09 00:02:35.572414 | orchestrator | + flavor_id = (known after apply)
2026-04-09 00:02:35.572418 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-04-09 00:02:35.572422 | orchestrator | + force_delete = false
2026-04-09 00:02:35.572426 | orchestrator | + hypervisor_hostname = (known after apply)
2026-04-09 00:02:35.572430 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.572433 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.572440 | orchestrator | + image_name = (known after apply)
2026-04-09 00:02:35.572444 | orchestrator | + key_pair = "testbed"
2026-04-09 00:02:35.572448 | orchestrator | + name = "testbed-node-4"
2026-04-09 00:02:35.572452 | orchestrator | + power_state = "active"
2026-04-09 00:02:35.572455 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.572459 | orchestrator | + security_groups = (known after apply)
2026-04-09 00:02:35.572463 | orchestrator | + stop_before_destroy = false
2026-04-09 00:02:35.572467 | orchestrator | + updated = (known after apply)
2026-04-09 00:02:35.572470 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-04-09 00:02:35.572474 | orchestrator |
2026-04-09 00:02:35.572478 | orchestrator | + block_device {
2026-04-09 00:02:35.572482 | orchestrator | + boot_index = 0
2026-04-09 00:02:35.572486 | orchestrator | + delete_on_termination = false
2026-04-09 00:02:35.572489 | orchestrator | + destination_type = "volume"
2026-04-09 00:02:35.572493 | orchestrator | + multiattach = false
2026-04-09 00:02:35.572497 | orchestrator | + source_type = "volume"
2026-04-09 00:02:35.572501 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.572505 | orchestrator | }
2026-04-09 00:02:35.572509 | orchestrator |
2026-04-09 00:02:35.572512 | orchestrator | + network {
2026-04-09 00:02:35.572532 | orchestrator | + access_network = false
2026-04-09 00:02:35.572536 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-04-09 00:02:35.572540 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-04-09 00:02:35.572544 | orchestrator | + mac = (known after apply)
2026-04-09 00:02:35.572547 | orchestrator | + name = (known after apply)
2026-04-09 00:02:35.572551 | orchestrator | + port = (known after apply)
2026-04-09 00:02:35.572555 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.572559 | orchestrator | }
2026-04-09 00:02:35.572563 | orchestrator | }
2026-04-09 00:02:35.572570 | orchestrator |
2026-04-09 00:02:35.572574 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created
2026-04-09 00:02:35.572577 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-04-09 00:02:35.572581 | orchestrator | + access_ip_v4 = (known after apply)
2026-04-09 00:02:35.572585 | orchestrator | + access_ip_v6 = (known after apply)
2026-04-09 00:02:35.572589 | orchestrator | + all_metadata = (known after apply)
2026-04-09 00:02:35.572592 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:35.572596 | orchestrator | + availability_zone = "nova"
2026-04-09 00:02:35.572600 | orchestrator | + config_drive = true
2026-04-09 00:02:35.572604 | orchestrator | + created = (known after apply)
2026-04-09 00:02:35.572607 | orchestrator | + flavor_id = (known after apply)
2026-04-09 00:02:35.572611 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-04-09 00:02:35.572615 | orchestrator | + force_delete = false
2026-04-09 00:02:35.572623 | orchestrator | + hypervisor_hostname = (known after apply)
2026-04-09 00:02:35.572627 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.572631 | orchestrator | + image_id = (known after apply)
2026-04-09 00:02:35.572635 | orchestrator | + image_name = (known after apply)
2026-04-09 00:02:35.572638 | orchestrator | + key_pair = "testbed"
2026-04-09 00:02:35.572642 | orchestrator | + name = "testbed-node-5"
2026-04-09 00:02:35.572646 | orchestrator | + power_state = "active"
2026-04-09 00:02:35.572650 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.572653 | orchestrator | + security_groups = (known after apply)
2026-04-09 00:02:35.572657 | orchestrator | + stop_before_destroy = false
2026-04-09 00:02:35.572661 | orchestrator | + updated = (known after apply)
2026-04-09 00:02:35.572665 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-04-09 00:02:35.572669 | orchestrator |
2026-04-09 00:02:35.572672 | orchestrator | + block_device {
2026-04-09 00:02:35.572676 | orchestrator | + boot_index = 0
2026-04-09 00:02:35.572680 | orchestrator | + delete_on_termination = false
2026-04-09 00:02:35.572684 | orchestrator | + destination_type = "volume"
2026-04-09 00:02:35.572687 | orchestrator | + multiattach = false
2026-04-09 00:02:35.572691 | orchestrator | + source_type = "volume"
2026-04-09 00:02:35.572695 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.572699 | orchestrator | }
2026-04-09 00:02:35.572702 | orchestrator |
2026-04-09 00:02:35.572706 | orchestrator | + network {
2026-04-09 00:02:35.572710 | orchestrator | + access_network = false
2026-04-09 00:02:35.572714 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-04-09 00:02:35.572718 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-04-09 00:02:35.572721 | orchestrator | + mac = (known after apply)
2026-04-09 00:02:35.572725 | orchestrator | + name = (known after apply)
2026-04-09 00:02:35.572729 | orchestrator | + port = (known after apply)
2026-04-09 00:02:35.572733 | orchestrator | + uuid = (known after apply)
2026-04-09 00:02:35.572737 | orchestrator | }
2026-04-09 00:02:35.572740 | orchestrator | }
2026-04-09 00:02:35.572744 | orchestrator |
2026-04-09 00:02:35.572748 | orchestrator | # openstack_compute_keypair_v2.key will be created
2026-04-09 00:02:35.572752 | orchestrator | + resource "openstack_compute_keypair_v2" "key" {
2026-04-09 00:02:35.572756 | orchestrator | + fingerprint = (known after apply)
2026-04-09 00:02:35.572759 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.572763 | orchestrator | + name = "testbed"
2026-04-09 00:02:35.572767 | orchestrator | + private_key = (sensitive value)
2026-04-09 00:02:35.572771 | orchestrator | + public_key = (known after apply)
2026-04-09 00:02:35.572774 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.572778 | orchestrator | + user_id = (known after apply)
2026-04-09 00:02:35.572782 | orchestrator | }
2026-04-09 00:02:35.572786 | orchestrator |
2026-04-09 00:02:35.572790 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
2026-04-09 00:02:35.572794 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:35.572801 | orchestrator | + device = (known after apply)
2026-04-09 00:02:35.572805 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.572809 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:35.572813 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.572816 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:35.572820 | orchestrator | }
2026-04-09 00:02:35.572824 | orchestrator |
2026-04-09 00:02:35.572828 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
2026-04-09 00:02:35.572831 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:35.572835 | orchestrator | + device = (known after apply)
2026-04-09 00:02:35.572839 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.572843 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:35.572847 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.572850 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:35.572854 | orchestrator | }
2026-04-09 00:02:35.572858 | orchestrator |
2026-04-09 00:02:35.572864 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
2026-04-09 00:02:35.572868 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:35.572872 | orchestrator | + device = (known after apply)
2026-04-09 00:02:35.572876 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.572880 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:35.572883 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.572887 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:35.572891 | orchestrator | }
2026-04-09 00:02:35.572895 | orchestrator |
2026-04-09 00:02:35.572899 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-04-09 00:02:35.572903 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:35.572906 | orchestrator | + device = (known after apply)
2026-04-09 00:02:35.572910 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.572914 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:35.572918 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.572922 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:35.572925 | orchestrator | }
2026-04-09 00:02:35.572929 | orchestrator |
2026-04-09 00:02:35.572933 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-04-09 00:02:35.572937 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:35.572941 | orchestrator | + device = (known after apply)
2026-04-09 00:02:35.572944 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.572948 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:35.572954 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.572958 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:35.572962 | orchestrator | }
2026-04-09 00:02:35.572966 | orchestrator |
2026-04-09 00:02:35.572970 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-04-09 00:02:35.572973 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:35.572977 | orchestrator | + device = (known after apply)
2026-04-09 00:02:35.572981 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.572985 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:35.572989 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.572992 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:35.572996 | orchestrator | }
2026-04-09 00:02:35.573000 | orchestrator |
2026-04-09 00:02:35.573004 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-04-09 00:02:35.573008 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:35.573011 | orchestrator | + device = (known after apply)
2026-04-09 00:02:35.573015 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.573019 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:35.573023 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.573029 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:35.573033 | orchestrator | }
2026-04-09 00:02:35.573037 | orchestrator |
2026-04-09 00:02:35.573041 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-04-09 00:02:35.573045 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:35.573049 | orchestrator | + device = (known after apply)
2026-04-09 00:02:35.573052 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.573056 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:35.573060 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.573064 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:35.573067 | orchestrator | }
2026-04-09 00:02:35.573071 | orchestrator |
2026-04-09 00:02:35.573075 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-04-09 00:02:35.573079 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-04-09 00:02:35.573083 | orchestrator | + device = (known after apply)
2026-04-09 00:02:35.573087 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.573090 | orchestrator | + instance_id = (known after apply)
2026-04-09 00:02:35.573094 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.573098 | orchestrator | + volume_id = (known after apply)
2026-04-09 00:02:35.573102 | orchestrator | }
2026-04-09 00:02:35.573105 | orchestrator |
2026-04-09 00:02:35.573109 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-04-09 00:02:35.573114 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-04-09 00:02:35.573117 | orchestrator | + fixed_ip = (known after apply)
2026-04-09 00:02:35.573121 | orchestrator | + floating_ip = (known after apply)
2026-04-09 00:02:35.573125 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.573129 | orchestrator | + port_id = (known after apply)
2026-04-09 00:02:35.573132 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.573136 | orchestrator | }
2026-04-09 00:02:35.573140 | orchestrator |
2026-04-09 00:02:35.573144 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-04-09 00:02:35.573148 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-04-09 00:02:35.573151 | orchestrator | + address = (known after apply)
2026-04-09 00:02:35.573155 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:35.573159 | orchestrator | + dns_domain = (known after apply)
2026-04-09 00:02:35.573163 | orchestrator | + dns_name = (known after apply)
2026-04-09 00:02:35.573166 | orchestrator | + fixed_ip = (known after apply)
2026-04-09 00:02:35.573170 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.573174 | orchestrator | + pool = "public"
2026-04-09 00:02:35.573178 | orchestrator | + port_id = (known after apply)
2026-04-09 00:02:35.573182 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.573185 | orchestrator | + subnet_id = (known after apply)
2026-04-09 00:02:35.573189 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:35.573193 | orchestrator | }
2026-04-09 00:02:35.573197 | orchestrator |
2026-04-09 00:02:35.573201 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-04-09 00:02:35.573204 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-04-09 00:02:35.573208 | orchestrator | + admin_state_up = (known after apply)
2026-04-09 00:02:35.573212 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:35.573216 | orchestrator | + availability_zone_hints = [
2026-04-09 00:02:35.573220 | orchestrator | + "nova",
2026-04-09 00:02:35.573223 | orchestrator | ]
2026-04-09 00:02:35.573227 | orchestrator | + dns_domain = (known after apply)
2026-04-09 00:02:35.573231 | orchestrator | + external = (known after apply)
2026-04-09 00:02:35.573235 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.573241 | orchestrator | + mtu = (known after apply)
2026-04-09 00:02:35.573245 | orchestrator | + name = "net-testbed-management"
2026-04-09 00:02:35.573249 | orchestrator | + port_security_enabled = (known after apply)
2026-04-09 00:02:35.573255 | orchestrator | + qos_policy_id = (known after apply) 2026-04-09 00:02:35.573259 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.573263 | orchestrator | + shared = (known after apply) 2026-04-09 00:02:35.573267 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.573270 | orchestrator | + transparent_vlan = (known after apply) 2026-04-09 00:02:35.573274 | orchestrator | 2026-04-09 00:02:35.573278 | orchestrator | + segments (known after apply) 2026-04-09 00:02:35.573282 | orchestrator | } 2026-04-09 00:02:35.573286 | orchestrator | 2026-04-09 00:02:35.573289 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created 2026-04-09 00:02:35.573293 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" { 2026-04-09 00:02:35.573297 | orchestrator | + admin_state_up = (known after apply) 2026-04-09 00:02:35.573301 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-09 00:02:35.573305 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-09 00:02:35.573311 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:35.573315 | orchestrator | + device_id = (known after apply) 2026-04-09 00:02:35.573318 | orchestrator | + device_owner = (known after apply) 2026-04-09 00:02:35.573322 | orchestrator | + dns_assignment = (known after apply) 2026-04-09 00:02:35.573326 | orchestrator | + dns_name = (known after apply) 2026-04-09 00:02:35.573329 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.573333 | orchestrator | + mac_address = (known after apply) 2026-04-09 00:02:35.573337 | orchestrator | + network_id = (known after apply) 2026-04-09 00:02:35.573341 | orchestrator | + port_security_enabled = (known after apply) 2026-04-09 00:02:35.573344 | orchestrator | + qos_policy_id = (known after apply) 2026-04-09 00:02:35.573348 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.573352 | 
orchestrator | + security_group_ids = (known after apply) 2026-04-09 00:02:35.573356 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.573359 | orchestrator | 2026-04-09 00:02:35.573363 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573367 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-09 00:02:35.573371 | orchestrator | } 2026-04-09 00:02:35.573375 | orchestrator | 2026-04-09 00:02:35.573378 | orchestrator | + binding (known after apply) 2026-04-09 00:02:35.573382 | orchestrator | 2026-04-09 00:02:35.573386 | orchestrator | + fixed_ip { 2026-04-09 00:02:35.573390 | orchestrator | + ip_address = "192.168.16.5" 2026-04-09 00:02:35.573394 | orchestrator | + subnet_id = (known after apply) 2026-04-09 00:02:35.573398 | orchestrator | } 2026-04-09 00:02:35.573401 | orchestrator | } 2026-04-09 00:02:35.573405 | orchestrator | 2026-04-09 00:02:35.573409 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created 2026-04-09 00:02:35.573413 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-09 00:02:35.573417 | orchestrator | + admin_state_up = (known after apply) 2026-04-09 00:02:35.573420 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-09 00:02:35.573424 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-09 00:02:35.573428 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:35.573432 | orchestrator | + device_id = (known after apply) 2026-04-09 00:02:35.573435 | orchestrator | + device_owner = (known after apply) 2026-04-09 00:02:35.573439 | orchestrator | + dns_assignment = (known after apply) 2026-04-09 00:02:35.573443 | orchestrator | + dns_name = (known after apply) 2026-04-09 00:02:35.573446 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.573450 | orchestrator | + mac_address = (known after apply) 2026-04-09 00:02:35.573454 | orchestrator | + network_id = (known after apply) 2026-04-09 
00:02:35.573458 | orchestrator | + port_security_enabled = (known after apply) 2026-04-09 00:02:35.573461 | orchestrator | + qos_policy_id = (known after apply) 2026-04-09 00:02:35.573465 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.573471 | orchestrator | + security_group_ids = (known after apply) 2026-04-09 00:02:35.573475 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.573479 | orchestrator | 2026-04-09 00:02:35.573483 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573486 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-09 00:02:35.573490 | orchestrator | } 2026-04-09 00:02:35.573494 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573498 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-09 00:02:35.573502 | orchestrator | } 2026-04-09 00:02:35.573505 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573509 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-09 00:02:35.573513 | orchestrator | } 2026-04-09 00:02:35.573527 | orchestrator | 2026-04-09 00:02:35.573531 | orchestrator | + binding (known after apply) 2026-04-09 00:02:35.573535 | orchestrator | 2026-04-09 00:02:35.573539 | orchestrator | + fixed_ip { 2026-04-09 00:02:35.573543 | orchestrator | + ip_address = "192.168.16.10" 2026-04-09 00:02:35.573547 | orchestrator | + subnet_id = (known after apply) 2026-04-09 00:02:35.573550 | orchestrator | } 2026-04-09 00:02:35.573554 | orchestrator | } 2026-04-09 00:02:35.573558 | orchestrator | 2026-04-09 00:02:35.573562 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created 2026-04-09 00:02:35.573565 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-09 00:02:35.573569 | orchestrator | + admin_state_up = (known after apply) 2026-04-09 00:02:35.573573 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-09 00:02:35.573577 | orchestrator | + all_security_group_ids = 
(known after apply) 2026-04-09 00:02:35.573581 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:35.573584 | orchestrator | + device_id = (known after apply) 2026-04-09 00:02:35.573588 | orchestrator | + device_owner = (known after apply) 2026-04-09 00:02:35.573592 | orchestrator | + dns_assignment = (known after apply) 2026-04-09 00:02:35.573596 | orchestrator | + dns_name = (known after apply) 2026-04-09 00:02:35.573599 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.573603 | orchestrator | + mac_address = (known after apply) 2026-04-09 00:02:35.573607 | orchestrator | + network_id = (known after apply) 2026-04-09 00:02:35.573611 | orchestrator | + port_security_enabled = (known after apply) 2026-04-09 00:02:35.573615 | orchestrator | + qos_policy_id = (known after apply) 2026-04-09 00:02:35.573618 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.573622 | orchestrator | + security_group_ids = (known after apply) 2026-04-09 00:02:35.573626 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.573629 | orchestrator | 2026-04-09 00:02:35.573637 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573641 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-09 00:02:35.573645 | orchestrator | } 2026-04-09 00:02:35.573649 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573653 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-09 00:02:35.573656 | orchestrator | } 2026-04-09 00:02:35.573660 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573664 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-09 00:02:35.573668 | orchestrator | } 2026-04-09 00:02:35.573671 | orchestrator | 2026-04-09 00:02:35.573675 | orchestrator | + binding (known after apply) 2026-04-09 00:02:35.573679 | orchestrator | 2026-04-09 00:02:35.573683 | orchestrator | + fixed_ip { 2026-04-09 00:02:35.573687 | orchestrator | + ip_address = "192.168.16.11" 2026-04-09 
00:02:35.573690 | orchestrator | + subnet_id = (known after apply) 2026-04-09 00:02:35.573694 | orchestrator | } 2026-04-09 00:02:35.573698 | orchestrator | } 2026-04-09 00:02:35.573702 | orchestrator | 2026-04-09 00:02:35.573706 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created 2026-04-09 00:02:35.573709 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-09 00:02:35.573713 | orchestrator | + admin_state_up = (known after apply) 2026-04-09 00:02:35.573717 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-09 00:02:35.573721 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-09 00:02:35.573725 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:35.573731 | orchestrator | + device_id = (known after apply) 2026-04-09 00:02:35.573735 | orchestrator | + device_owner = (known after apply) 2026-04-09 00:02:35.573739 | orchestrator | + dns_assignment = (known after apply) 2026-04-09 00:02:35.573743 | orchestrator | + dns_name = (known after apply) 2026-04-09 00:02:35.573749 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.573753 | orchestrator | + mac_address = (known after apply) 2026-04-09 00:02:35.573757 | orchestrator | + network_id = (known after apply) 2026-04-09 00:02:35.573761 | orchestrator | + port_security_enabled = (known after apply) 2026-04-09 00:02:35.573765 | orchestrator | + qos_policy_id = (known after apply) 2026-04-09 00:02:35.573769 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.573772 | orchestrator | + security_group_ids = (known after apply) 2026-04-09 00:02:35.573776 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.573780 | orchestrator | 2026-04-09 00:02:35.573784 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573787 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-09 00:02:35.573791 | orchestrator | } 2026-04-09 00:02:35.573795 | 
orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573799 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-09 00:02:35.573803 | orchestrator | } 2026-04-09 00:02:35.573806 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573810 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-09 00:02:35.573814 | orchestrator | } 2026-04-09 00:02:35.573818 | orchestrator | 2026-04-09 00:02:35.573821 | orchestrator | + binding (known after apply) 2026-04-09 00:02:35.573825 | orchestrator | 2026-04-09 00:02:35.573829 | orchestrator | + fixed_ip { 2026-04-09 00:02:35.573833 | orchestrator | + ip_address = "192.168.16.12" 2026-04-09 00:02:35.573836 | orchestrator | + subnet_id = (known after apply) 2026-04-09 00:02:35.573840 | orchestrator | } 2026-04-09 00:02:35.573844 | orchestrator | } 2026-04-09 00:02:35.573848 | orchestrator | 2026-04-09 00:02:35.573851 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created 2026-04-09 00:02:35.573855 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-09 00:02:35.573859 | orchestrator | + admin_state_up = (known after apply) 2026-04-09 00:02:35.573863 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-09 00:02:35.573866 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-09 00:02:35.573870 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:35.573874 | orchestrator | + device_id = (known after apply) 2026-04-09 00:02:35.573878 | orchestrator | + device_owner = (known after apply) 2026-04-09 00:02:35.573882 | orchestrator | + dns_assignment = (known after apply) 2026-04-09 00:02:35.573885 | orchestrator | + dns_name = (known after apply) 2026-04-09 00:02:35.573889 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.573893 | orchestrator | + mac_address = (known after apply) 2026-04-09 00:02:35.573897 | orchestrator | + network_id = (known after apply) 2026-04-09 00:02:35.573900 
| orchestrator | + port_security_enabled = (known after apply) 2026-04-09 00:02:35.573904 | orchestrator | + qos_policy_id = (known after apply) 2026-04-09 00:02:35.573908 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.573912 | orchestrator | + security_group_ids = (known after apply) 2026-04-09 00:02:35.573915 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.573919 | orchestrator | 2026-04-09 00:02:35.573923 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573927 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-09 00:02:35.573931 | orchestrator | } 2026-04-09 00:02:35.573934 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573938 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-09 00:02:35.573942 | orchestrator | } 2026-04-09 00:02:35.573946 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.573949 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-09 00:02:35.573953 | orchestrator | } 2026-04-09 00:02:35.573957 | orchestrator | 2026-04-09 00:02:35.573964 | orchestrator | + binding (known after apply) 2026-04-09 00:02:35.573967 | orchestrator | 2026-04-09 00:02:35.573971 | orchestrator | + fixed_ip { 2026-04-09 00:02:35.573975 | orchestrator | + ip_address = "192.168.16.13" 2026-04-09 00:02:35.573979 | orchestrator | + subnet_id = (known after apply) 2026-04-09 00:02:35.573983 | orchestrator | } 2026-04-09 00:02:35.573986 | orchestrator | } 2026-04-09 00:02:35.573990 | orchestrator | 2026-04-09 00:02:35.573994 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created 2026-04-09 00:02:35.573998 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-09 00:02:35.574002 | orchestrator | + admin_state_up = (known after apply) 2026-04-09 00:02:35.574005 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-09 00:02:35.574009 | orchestrator | + all_security_group_ids = (known after apply) 
2026-04-09 00:02:35.574158 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:35.574168 | orchestrator | + device_id = (known after apply) 2026-04-09 00:02:35.574171 | orchestrator | + device_owner = (known after apply) 2026-04-09 00:02:35.574175 | orchestrator | + dns_assignment = (known after apply) 2026-04-09 00:02:35.574179 | orchestrator | + dns_name = (known after apply) 2026-04-09 00:02:35.574183 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.574186 | orchestrator | + mac_address = (known after apply) 2026-04-09 00:02:35.574190 | orchestrator | + network_id = (known after apply) 2026-04-09 00:02:35.574194 | orchestrator | + port_security_enabled = (known after apply) 2026-04-09 00:02:35.574198 | orchestrator | + qos_policy_id = (known after apply) 2026-04-09 00:02:35.574205 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.574209 | orchestrator | + security_group_ids = (known after apply) 2026-04-09 00:02:35.574213 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.574217 | orchestrator | 2026-04-09 00:02:35.574221 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.574225 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-09 00:02:35.574228 | orchestrator | } 2026-04-09 00:02:35.574232 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.574236 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-09 00:02:35.574239 | orchestrator | } 2026-04-09 00:02:35.574243 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.574247 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-09 00:02:35.574250 | orchestrator | } 2026-04-09 00:02:35.574254 | orchestrator | 2026-04-09 00:02:35.574258 | orchestrator | + binding (known after apply) 2026-04-09 00:02:35.574262 | orchestrator | 2026-04-09 00:02:35.574265 | orchestrator | + fixed_ip { 2026-04-09 00:02:35.574269 | orchestrator | + ip_address = "192.168.16.14" 2026-04-09 00:02:35.574273 | orchestrator 
| + subnet_id = (known after apply) 2026-04-09 00:02:35.574277 | orchestrator | } 2026-04-09 00:02:35.574280 | orchestrator | } 2026-04-09 00:02:35.574284 | orchestrator | 2026-04-09 00:02:35.574288 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created 2026-04-09 00:02:35.574291 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-09 00:02:35.574295 | orchestrator | + admin_state_up = (known after apply) 2026-04-09 00:02:35.574299 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-09 00:02:35.574303 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-09 00:02:35.574306 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:35.574310 | orchestrator | + device_id = (known after apply) 2026-04-09 00:02:35.574314 | orchestrator | + device_owner = (known after apply) 2026-04-09 00:02:35.574317 | orchestrator | + dns_assignment = (known after apply) 2026-04-09 00:02:35.574321 | orchestrator | + dns_name = (known after apply) 2026-04-09 00:02:35.574325 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.574328 | orchestrator | + mac_address = (known after apply) 2026-04-09 00:02:35.574332 | orchestrator | + network_id = (known after apply) 2026-04-09 00:02:35.574336 | orchestrator | + port_security_enabled = (known after apply) 2026-04-09 00:02:35.574340 | orchestrator | + qos_policy_id = (known after apply) 2026-04-09 00:02:35.574347 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.574351 | orchestrator | + security_group_ids = (known after apply) 2026-04-09 00:02:35.574355 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.574358 | orchestrator | 2026-04-09 00:02:35.574362 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.574366 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-09 00:02:35.574369 | orchestrator | } 2026-04-09 00:02:35.574373 | orchestrator | + allowed_address_pairs 
{ 2026-04-09 00:02:35.574377 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-09 00:02:35.574381 | orchestrator | } 2026-04-09 00:02:35.574384 | orchestrator | + allowed_address_pairs { 2026-04-09 00:02:35.574388 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-09 00:02:35.574392 | orchestrator | } 2026-04-09 00:02:35.574396 | orchestrator | 2026-04-09 00:02:35.574405 | orchestrator | + binding (known after apply) 2026-04-09 00:02:35.574409 | orchestrator | 2026-04-09 00:02:35.574413 | orchestrator | + fixed_ip { 2026-04-09 00:02:35.574416 | orchestrator | + ip_address = "192.168.16.15" 2026-04-09 00:02:35.574420 | orchestrator | + subnet_id = (known after apply) 2026-04-09 00:02:35.574424 | orchestrator | } 2026-04-09 00:02:35.574428 | orchestrator | } 2026-04-09 00:02:35.574431 | orchestrator | 2026-04-09 00:02:35.574435 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created 2026-04-09 00:02:35.574439 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" { 2026-04-09 00:02:35.574442 | orchestrator | + force_destroy = false 2026-04-09 00:02:35.574446 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.574450 | orchestrator | + port_id = (known after apply) 2026-04-09 00:02:35.574454 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.574457 | orchestrator | + router_id = (known after apply) 2026-04-09 00:02:35.574461 | orchestrator | + subnet_id = (known after apply) 2026-04-09 00:02:35.574465 | orchestrator | } 2026-04-09 00:02:35.574469 | orchestrator | 2026-04-09 00:02:35.574472 | orchestrator | # openstack_networking_router_v2.router will be created 2026-04-09 00:02:35.574476 | orchestrator | + resource "openstack_networking_router_v2" "router" { 2026-04-09 00:02:35.574480 | orchestrator | + admin_state_up = (known after apply) 2026-04-09 00:02:35.574483 | orchestrator | + all_tags = (known after apply) 2026-04-09 00:02:35.574487 | 
orchestrator | + availability_zone_hints = [ 2026-04-09 00:02:35.574491 | orchestrator | + "nova", 2026-04-09 00:02:35.574495 | orchestrator | ] 2026-04-09 00:02:35.574498 | orchestrator | + distributed = (known after apply) 2026-04-09 00:02:35.574502 | orchestrator | + enable_snat = (known after apply) 2026-04-09 00:02:35.574506 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2026-04-09 00:02:35.574510 | orchestrator | + external_qos_policy_id = (known after apply) 2026-04-09 00:02:35.574513 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.574548 | orchestrator | + name = "testbed" 2026-04-09 00:02:35.574552 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.574556 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.574560 | orchestrator | 2026-04-09 00:02:35.574564 | orchestrator | + external_fixed_ip (known after apply) 2026-04-09 00:02:35.574568 | orchestrator | } 2026-04-09 00:02:35.574572 | orchestrator | 2026-04-09 00:02:35.574576 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2026-04-09 00:02:35.574580 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2026-04-09 00:02:35.574584 | orchestrator | + description = "ssh" 2026-04-09 00:02:35.574588 | orchestrator | + direction = "ingress" 2026-04-09 00:02:35.574591 | orchestrator | + ethertype = "IPv4" 2026-04-09 00:02:35.574595 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.574599 | orchestrator | + port_range_max = 22 2026-04-09 00:02:35.574603 | orchestrator | + port_range_min = 22 2026-04-09 00:02:35.574606 | orchestrator | + protocol = "tcp" 2026-04-09 00:02:35.574610 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.574617 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-09 00:02:35.574621 | orchestrator | + remote_group_id = (known after apply) 2026-04-09 
00:02:35.574625 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-09 00:02:35.574629 | orchestrator | + security_group_id = (known after apply) 2026-04-09 00:02:35.574633 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.574636 | orchestrator | } 2026-04-09 00:02:35.574640 | orchestrator | 2026-04-09 00:02:35.574644 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2026-04-09 00:02:35.574650 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2026-04-09 00:02:35.574654 | orchestrator | + description = "wireguard" 2026-04-09 00:02:35.574658 | orchestrator | + direction = "ingress" 2026-04-09 00:02:35.574662 | orchestrator | + ethertype = "IPv4" 2026-04-09 00:02:35.574666 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.574669 | orchestrator | + port_range_max = 51820 2026-04-09 00:02:35.574673 | orchestrator | + port_range_min = 51820 2026-04-09 00:02:35.574677 | orchestrator | + protocol = "udp" 2026-04-09 00:02:35.574680 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.574684 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-09 00:02:35.574688 | orchestrator | + remote_group_id = (known after apply) 2026-04-09 00:02:35.574692 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-09 00:02:35.574695 | orchestrator | + security_group_id = (known after apply) 2026-04-09 00:02:35.574699 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.574703 | orchestrator | } 2026-04-09 00:02:35.574707 | orchestrator | 2026-04-09 00:02:35.574710 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2026-04-09 00:02:35.574714 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2026-04-09 00:02:35.574718 | orchestrator | + direction = "ingress" 2026-04-09 00:02:35.574722 
| orchestrator | + ethertype = "IPv4" 2026-04-09 00:02:35.574726 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.574729 | orchestrator | + protocol = "tcp" 2026-04-09 00:02:35.574733 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.574737 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-09 00:02:35.574741 | orchestrator | + remote_group_id = (known after apply) 2026-04-09 00:02:35.574745 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-04-09 00:02:35.574748 | orchestrator | + security_group_id = (known after apply) 2026-04-09 00:02:35.574752 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.574756 | orchestrator | } 2026-04-09 00:02:35.574760 | orchestrator | 2026-04-09 00:02:35.574763 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2026-04-09 00:02:35.574767 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2026-04-09 00:02:35.574771 | orchestrator | + direction = "ingress" 2026-04-09 00:02:35.574775 | orchestrator | + ethertype = "IPv4" 2026-04-09 00:02:35.574778 | orchestrator | + id = (known after apply) 2026-04-09 00:02:35.574782 | orchestrator | + protocol = "udp" 2026-04-09 00:02:35.574786 | orchestrator | + region = (known after apply) 2026-04-09 00:02:35.574790 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-09 00:02:35.574793 | orchestrator | + remote_group_id = (known after apply) 2026-04-09 00:02:35.574797 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-04-09 00:02:35.574801 | orchestrator | + security_group_id = (known after apply) 2026-04-09 00:02:35.574805 | orchestrator | + tenant_id = (known after apply) 2026-04-09 00:02:35.574809 | orchestrator | } 2026-04-09 00:02:35.574812 | orchestrator | 2026-04-09 00:02:35.574816 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will 
be created
2026-04-09 00:02:35.574823 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-04-09 00:02:35.574827 | orchestrator | + direction = "ingress"
2026-04-09 00:02:35.574831 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:35.574835 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.574839 | orchestrator | + protocol = "icmp"
2026-04-09 00:02:35.574842 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.574846 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:35.574850 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:35.574854 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-09 00:02:35.574857 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:35.574861 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:35.574865 | orchestrator | }
2026-04-09 00:02:35.574869 | orchestrator |
2026-04-09 00:02:35.574873 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-04-09 00:02:35.574876 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-04-09 00:02:35.574880 | orchestrator | + direction = "ingress"
2026-04-09 00:02:35.574884 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:35.574888 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.574891 | orchestrator | + protocol = "tcp"
2026-04-09 00:02:35.574895 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.574899 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:35.574905 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:35.574909 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-09 00:02:35.574913 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:35.574917 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:35.574921 | orchestrator | }
2026-04-09 00:02:35.574924 | orchestrator |
2026-04-09 00:02:35.574928 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-04-09 00:02:35.574932 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-04-09 00:02:35.574936 | orchestrator | + direction = "ingress"
2026-04-09 00:02:35.574939 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:35.574943 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.574947 | orchestrator | + protocol = "udp"
2026-04-09 00:02:35.574950 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.574954 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:35.574958 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:35.574962 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-09 00:02:35.574966 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:35.574970 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:35.574974 | orchestrator | }
2026-04-09 00:02:35.574977 | orchestrator |
2026-04-09 00:02:35.574984 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-04-09 00:02:35.574988 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-04-09 00:02:35.574992 | orchestrator | + direction = "ingress"
2026-04-09 00:02:35.574999 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:35.575003 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.575007 | orchestrator | + protocol = "icmp"
2026-04-09 00:02:35.575011 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.575014 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:35.575018 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:35.575022 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-09 00:02:35.575026 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:35.575029 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:35.575036 | orchestrator | }
2026-04-09 00:02:35.575040 | orchestrator |
2026-04-09 00:02:35.575044 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-04-09 00:02:35.575048 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-04-09 00:02:35.575051 | orchestrator | + description = "vrrp"
2026-04-09 00:02:35.575055 | orchestrator | + direction = "ingress"
2026-04-09 00:02:35.575059 | orchestrator | + ethertype = "IPv4"
2026-04-09 00:02:35.575063 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.575067 | orchestrator | + protocol = "112"
2026-04-09 00:02:35.575070 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.575074 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-09 00:02:35.575078 | orchestrator | + remote_group_id = (known after apply)
2026-04-09 00:02:35.575082 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-09 00:02:35.575086 | orchestrator | + security_group_id = (known after apply)
2026-04-09 00:02:35.575090 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:35.575093 | orchestrator | }
2026-04-09 00:02:35.575097 | orchestrator |
2026-04-09 00:02:35.575101 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-04-09 00:02:35.575105 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-04-09 00:02:35.575109 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:35.575112 | orchestrator | + description = "management security group"
2026-04-09 00:02:35.575116 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.575120 | orchestrator | + name = "testbed-management"
2026-04-09 00:02:35.575124 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.575128 | orchestrator | + stateful = (known after apply)
2026-04-09 00:02:35.575131 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:35.575135 | orchestrator | }
2026-04-09 00:02:35.575139 | orchestrator |
2026-04-09 00:02:35.575143 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-04-09 00:02:35.575147 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-04-09 00:02:35.575151 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:35.575155 | orchestrator | + description = "node security group"
2026-04-09 00:02:35.575158 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.575162 | orchestrator | + name = "testbed-node"
2026-04-09 00:02:35.575166 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.575170 | orchestrator | + stateful = (known after apply)
2026-04-09 00:02:35.575174 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:35.575177 | orchestrator | }
2026-04-09 00:02:35.575181 | orchestrator |
2026-04-09 00:02:35.575185 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-04-09 00:02:35.575189 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-04-09 00:02:35.575193 | orchestrator | + all_tags = (known after apply)
2026-04-09 00:02:35.575197 | orchestrator | + cidr = "192.168.16.0/20"
2026-04-09 00:02:35.575200 | orchestrator | + dns_nameservers = [
2026-04-09 00:02:35.575204 | orchestrator | + "8.8.8.8",
2026-04-09 00:02:35.575208 | orchestrator | + "9.9.9.9",
2026-04-09 00:02:35.575212 | orchestrator | ]
2026-04-09 00:02:35.575216 | orchestrator | + enable_dhcp = true
2026-04-09 00:02:35.575220 | orchestrator | + gateway_ip = (known after apply)
2026-04-09 00:02:35.575223 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.575227 | orchestrator | + ip_version = 4
2026-04-09 00:02:35.575231 | orchestrator | + ipv6_address_mode = (known after apply)
2026-04-09 00:02:35.575235 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-04-09 00:02:35.575239 | orchestrator | + name = "subnet-testbed-management"
2026-04-09 00:02:35.575242 | orchestrator | + network_id = (known after apply)
2026-04-09 00:02:35.575246 | orchestrator | + no_gateway = false
2026-04-09 00:02:35.575250 | orchestrator | + region = (known after apply)
2026-04-09 00:02:35.575254 | orchestrator | + service_types = (known after apply)
2026-04-09 00:02:35.575261 | orchestrator | + tenant_id = (known after apply)
2026-04-09 00:02:35.575265 | orchestrator |
2026-04-09 00:02:35.575269 | orchestrator | + allocation_pool {
2026-04-09 00:02:35.575272 | orchestrator | + end = "192.168.31.250"
2026-04-09 00:02:35.575276 | orchestrator | + start = "192.168.31.200"
2026-04-09 00:02:35.575280 | orchestrator | }
2026-04-09 00:02:35.575284 | orchestrator | }
2026-04-09 00:02:35.575288 | orchestrator |
2026-04-09 00:02:35.575292 | orchestrator | # terraform_data.image will be created
2026-04-09 00:02:35.575295 | orchestrator | + resource "terraform_data" "image" {
2026-04-09 00:02:35.575299 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.575303 | orchestrator | + input = "Ubuntu 24.04"
2026-04-09 00:02:35.575307 | orchestrator | + output = (known after apply)
2026-04-09 00:02:35.575310 | orchestrator | }
2026-04-09 00:02:35.575314 | orchestrator |
2026-04-09 00:02:35.575318 | orchestrator | # terraform_data.image_node will be created
2026-04-09 00:02:35.575322 | orchestrator | + resource "terraform_data" "image_node" {
2026-04-09 00:02:35.575326 | orchestrator | + id = (known after apply)
2026-04-09 00:02:35.575329 | orchestrator | + input = "Ubuntu 24.04"
2026-04-09 00:02:35.575333 | orchestrator | + output = (known after apply)
2026-04-09 00:02:35.575337 | orchestrator | }
2026-04-09 00:02:35.575341 | orchestrator |
2026-04-09 00:02:35.575344 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-04-09 00:02:35.575348 | orchestrator |
2026-04-09 00:02:35.575352 | orchestrator | Changes to Outputs:
2026-04-09 00:02:35.575356 | orchestrator | + manager_address = (sensitive value)
2026-04-09 00:02:35.575360 | orchestrator | + private_key = (sensitive value)
2026-04-09 00:02:35.831932 | orchestrator | terraform_data.image_node: Creating...
2026-04-09 00:02:35.832003 | orchestrator | terraform_data.image: Creating...
2026-04-09 00:02:35.832016 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=1333971c-e59a-7e4b-e1a4-ef28bdb2b726]
2026-04-09 00:02:35.832026 | orchestrator | terraform_data.image: Creation complete after 0s [id=99fc4d41-b6ff-cc61-24e5-8b5dc805e797]
2026-04-09 00:02:35.848577 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-09 00:02:35.850067 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-09 00:02:35.863688 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-09 00:02:35.864067 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-09 00:02:35.864792 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-09 00:02:35.864982 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-09 00:02:35.865156 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-09 00:02:35.866366 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-09 00:02:35.869589 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-09 00:02:35.874156 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-09 00:02:36.344236 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-09 00:02:36.352554 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-09 00:02:36.363485 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-09 00:02:36.367693 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-09 00:02:36.417784 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-04-09 00:02:36.426822 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-09 00:02:37.008106 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=36d03ac5-0eae-4b14-8419-678b860bbdb5]
2026-04-09 00:02:37.017775 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-09 00:02:39.576952 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=de323fae-e08c-44ab-9f5d-e0649991af02]
2026-04-09 00:02:39.584177 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-09 00:02:39.587805 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=7d3f3539-bcc0-40e2-bb47-88465426d961]
2026-04-09 00:02:39.600188 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-09 00:02:39.604738 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=4915a96f-c727-49cd-8e71-365065423554]
2026-04-09 00:02:39.606947 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=0aa1a7f9-eb63-47f4-a3c4-c66e6167b3d6]
2026-04-09 00:02:39.623469 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-09 00:02:39.628826 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-09 00:02:39.631935 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=4037060ebc7f6bb386a6d4aa73e6e3d664e4a1e6]
2026-04-09 00:02:39.641047 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-09 00:02:39.647031 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=0b251025f2f6d5d2b2a92931dea3785e87985be6]
2026-04-09 00:02:39.651703 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-09 00:02:39.655929 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=1117e366-620b-4195-b3cd-cb9d1ba2563b]
2026-04-09 00:02:39.664780 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-09 00:02:39.678730 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=a2730516-0b41-4086-99de-bfe7a2602e3b]
2026-04-09 00:02:39.684332 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-09 00:02:39.712053 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=b113bf69-5b2f-465f-b4d6-8ed3709e703c]
2026-04-09 00:02:39.719583 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-09 00:02:39.735946 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=78a0dd59-f7ff-4f21-9079-dceaea0538fa]
2026-04-09 00:02:39.755973 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=cc2e9d6e-928c-46c6-aaaa-26c6da7e313f]
2026-04-09 00:02:40.422073 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=054d6a4f-ae79-440d-aa30-cec1ade3ccaa]
2026-04-09 00:02:41.082399 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=10b13542-f896-43d0-be4c-49d38f7214df]
2026-04-09 00:02:41.082472 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-09 00:02:43.066146 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=1ca5d9af-c9b0-4634-80a3-044251651961]
2026-04-09 00:02:43.100778 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=5052af24-97ad-428a-a556-7be1e7d9033f]
2026-04-09 00:02:43.147873 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=74b5ef9f-7038-474f-83c8-72643aabc9bd]
2026-04-09 00:02:43.180986 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=bbe1802f-7171-48ed-9202-61a04dc54e1c]
2026-04-09 00:02:43.208177 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=3a6d3317-2b94-4d3e-96ca-e5381511ebbc]
2026-04-09 00:02:43.282568 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=3cfb3d4b-a336-425e-b827-5a144578e3d1]
2026-04-09 00:02:45.223551 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=1b0df1b8-a43c-45cc-b171-da54d151e012]
2026-04-09 00:02:45.230523 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-09 00:02:45.231653 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-09 00:02:45.234420 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-09 00:02:45.580150 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=11c55ab8-4339-4ed8-8bef-7443b6a8c74a]
2026-04-09 00:02:45.588686 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-09 00:02:45.591486 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-09 00:02:45.594285 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-09 00:02:45.595774 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-09 00:02:45.595966 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-09 00:02:45.596657 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-09 00:02:45.649415 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=f5b1c696-de75-4b86-883f-9968fda3a1e4]
2026-04-09 00:02:45.655312 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-09 00:02:45.657692 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-09 00:02:45.657735 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-09 00:02:45.872254 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=77acf3fd-48f8-43a5-a089-97c918b98322]
2026-04-09 00:02:45.884106 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-09 00:02:45.938801 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=15a161e4-f60d-448b-8c21-590f93a1d675]
2026-04-09 00:02:45.949726 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-09 00:02:46.058819 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=144226ef-653f-4d72-9e4d-3782c556d64c]
2026-04-09 00:02:46.065201 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-09 00:02:46.324891 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=a2307e77-2da9-415d-a44a-ffdf4649e03d]
2026-04-09 00:02:46.336841 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-09 00:02:46.402187 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=ee767d6d-d21f-4481-a0c2-0a0702c81e65]
2026-04-09 00:02:46.423511 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-09 00:02:46.704524 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=11bc3980-4139-4066-8a03-6deaa73fc812]
2026-04-09 00:02:46.720161 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-09 00:02:46.930584 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=92b5e1c3-de48-4577-a284-54bc36665c1a]
2026-04-09 00:02:46.942210 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=d2c85f1b-534a-47f9-b85a-b547b7f24981]
2026-04-09 00:02:46.942631 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-09 00:02:46.980648 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=6faa6520-7ba1-420f-9a8d-b2dd5bed4d65]
2026-04-09 00:02:47.060760 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=24fc093e-7cbb-4e5c-9e56-50f93a620f5c]
2026-04-09 00:02:47.082089 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=3efebdbb-2774-416a-aa55-07572cd33f59]
2026-04-09 00:02:47.163750 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=55c83c3f-bb6e-4c1f-b845-5716a9a75a6f]
2026-04-09 00:02:47.189075 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=8b8793e6-6ab5-462c-b8b7-426706a7b48a]
2026-04-09 00:02:47.543580 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=26d25791-9094-41aa-9a00-4e9b02ae46fc]
2026-04-09 00:02:47.620718 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=8c1f28e0-6df0-45d7-93a1-d5e899179ec6]
2026-04-09 00:02:48.596493 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 3s [id=1c90fada-8d60-40a0-b20e-140df2ae132d]
2026-04-09 00:02:49.460306 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=2b93c39f-7148-4979-8ccc-7616b5906118]
2026-04-09 00:02:49.495004 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-09 00:02:49.495080 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-09 00:02:49.496308 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-09 00:02:49.503911 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-09 00:02:49.507886 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-09 00:02:49.508030 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-09 00:02:49.527138 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-09 00:02:51.462155 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=4c6fcfd4-b186-4d22-baa9-fb3ca759d0c4]
2026-04-09 00:02:51.470864 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-09 00:02:51.476598 | orchestrator | local_file.inventory: Creating...
2026-04-09 00:02:51.478943 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-09 00:02:51.486398 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=97ba8698c720f17f11849d494062bb98d9351edd]
2026-04-09 00:02:51.488328 | orchestrator | local_file.inventory: Creation complete after 0s [id=25c8ca303d2d4f7810fcbc6d04cfade0b41bdc81]
2026-04-09 00:02:52.383844 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=4c6fcfd4-b186-4d22-baa9-fb3ca759d0c4]
2026-04-09 00:02:59.498770 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-09 00:02:59.507988 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [11s elapsed]
2026-04-09 00:02:59.509195 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-09 00:02:59.509255 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-09 00:02:59.509264 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-09 00:02:59.528403 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-09 00:03:09.506287 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [21s elapsed]
2026-04-09 00:03:09.508703 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [21s elapsed]
2026-04-09 00:03:09.509974 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-09 00:03:09.510047 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-09 00:03:09.510064 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-09 00:03:09.529299 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-09 00:03:10.374498 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=a500b412-4170-4023-a940-eb180d49185e]
2026-04-09 00:03:19.514880 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-09 00:03:19.514966 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [31s elapsed]
2026-04-09 00:03:19.514973 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-09 00:03:19.514986 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-09 00:03:19.530333 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-09 00:03:29.524062 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-04-09 00:03:29.524159 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-04-09 00:03:29.524169 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-04-09 00:03:29.524176 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [41s elapsed]
2026-04-09 00:03:29.531402 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-04-09 00:03:39.530298 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [51s elapsed]
2026-04-09 00:03:39.530381 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-04-09 00:03:39.530393 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-04-09 00:03:39.530412 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-04-09 00:03:39.531554 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-04-09 00:03:49.538303 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m0s elapsed]
2026-04-09 00:03:49.538395 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m1s elapsed]
2026-04-09 00:03:49.538407 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed]
2026-04-09 00:03:49.538413 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m0s elapsed]
2026-04-09 00:03:49.538430 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed]
2026-04-09 00:03:50.506819 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m2s [id=e363ecf9-ea15-4613-8c04-1013f3b4e823]
2026-04-09 00:03:50.604915 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m1s [id=ba222885-9361-4756-ab2f-38f383e2ef3a]
2026-04-09 00:03:50.646417 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m1s [id=21ab975f-a597-4c68-b19c-585d03403503]
2026-04-09 00:03:50.665576 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m1s [id=de5d81e2-b11b-4030-95ba-9e813421d1b8]
2026-04-09 00:03:50.766999 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 1m1s [id=43cb1a37-1859-428f-a1e5-cd0071e06f74]
2026-04-09 00:03:50.814475 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-09 00:03:50.818065 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-09 00:03:50.818258 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=6958458592837557334]
2026-04-09 00:03:50.821730 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-09 00:03:50.823997 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-09 00:03:50.824522 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-09 00:03:50.847287 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-09 00:03:50.859731 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-09 00:03:50.870309 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-09 00:03:50.892954 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-09 00:03:50.896512 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-09 00:03:50.911163 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-09 00:03:54.252118 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=ba222885-9361-4756-ab2f-38f383e2ef3a/0aa1a7f9-eb63-47f4-a3c4-c66e6167b3d6]
2026-04-09 00:03:54.263133 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=21ab975f-a597-4c68-b19c-585d03403503/78a0dd59-f7ff-4f21-9079-dceaea0538fa]
2026-04-09 00:03:54.285980 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=a500b412-4170-4023-a940-eb180d49185e/b113bf69-5b2f-465f-b4d6-8ed3709e703c]
2026-04-09 00:03:54.292442 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=ba222885-9361-4756-ab2f-38f383e2ef3a/de323fae-e08c-44ab-9f5d-e0649991af02]
2026-04-09 00:03:54.313233 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=a500b412-4170-4023-a940-eb180d49185e/cc2e9d6e-928c-46c6-aaaa-26c6da7e313f]
2026-04-09 00:03:54.324122 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=21ab975f-a597-4c68-b19c-585d03403503/7d3f3539-bcc0-40e2-bb47-88465426d961]
2026-04-09 00:04:00.387324 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=21ab975f-a597-4c68-b19c-585d03403503/a2730516-0b41-4086-99de-bfe7a2602e3b]
2026-04-09 00:04:00.396495 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=ba222885-9361-4756-ab2f-38f383e2ef3a/4915a96f-c727-49cd-8e71-365065423554]
2026-04-09 00:04:00.426273 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=a500b412-4170-4023-a940-eb180d49185e/1117e366-620b-4195-b3cd-cb9d1ba2563b]
2026-04-09 00:04:00.889163 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-09 00:04:10.889675 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-09 00:04:11.420362 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=4a89fa4d-fc7e-4f16-a87f-fefc30fbb2ac]
2026-04-09 00:04:11.439632 | orchestrator |
2026-04-09 00:04:11.439711 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-09 00:04:11.439718 | orchestrator |
2026-04-09 00:04:11.439723 | orchestrator | Outputs:
2026-04-09 00:04:11.439727 | orchestrator |
2026-04-09 00:04:11.439731 | orchestrator | manager_address =
2026-04-09 00:04:11.439736 | orchestrator | private_key =
2026-04-09 00:04:11.515578 | orchestrator | ok: Runtime: 0:01:41.484814
2026-04-09 00:04:11.536097 |
2026-04-09 00:04:11.536235 | TASK [Fetch manager address]
2026-04-09 00:04:12.034452 | orchestrator | ok
2026-04-09 00:04:12.045504 |
2026-04-09 00:04:12.045653 | TASK [Set manager_host address]
2026-04-09 00:04:12.127307 | orchestrator | ok
2026-04-09 00:04:12.136570 |
2026-04-09 00:04:12.136699 | LOOP [Update ansible collections]
2026-04-09 00:04:13.278570 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-09 00:04:13.279102 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-09 00:04:13.279162 | orchestrator | Starting galaxy collection install process
2026-04-09 00:04:13.279188 | orchestrator | Process install dependency map
2026-04-09 00:04:13.279210 | orchestrator | Starting collection install process
2026-04-09 00:04:13.279231 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2026-04-09 00:04:13.279257 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons
2026-04-09 00:04:13.279284 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-09 00:04:13.279345 | orchestrator | ok: Item: commons Runtime: 0:00:00.783589
2026-04-09 00:04:14.537849 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-09 00:04:14.538043 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-09 00:04:14.538100 | orchestrator | Starting galaxy collection install process
2026-04-09 00:04:14.538143 | orchestrator | Process install dependency map
2026-04-09 00:04:14.538182 | orchestrator | Starting collection install process
2026-04-09 00:04:14.538218 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services'
2026-04-09 00:04:14.538254 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services
2026-04-09 00:04:14.538290 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-09 00:04:14.538346 | orchestrator | ok: Item: services Runtime: 0:00:00.978483
2026-04-09 00:04:14.554981 |
2026-04-09 00:04:14.555141 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-09 00:04:25.156775 | orchestrator | ok
2026-04-09 00:04:25.168488 |
2026-04-09 00:04:25.168613 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-09 00:05:25.205752 | orchestrator | ok
2026-04-09 00:05:25.218556 |
2026-04-09 00:05:25.218698 | TASK [Fetch manager ssh hostkey]
2026-04-09 00:05:26.795669 | orchestrator | Output suppressed because no_log was given
2026-04-09 00:05:26.805711 |
2026-04-09 00:05:26.806070 | TASK [Get ssh keypair from terraform environment]
2026-04-09 00:05:27.392087 | orchestrator | ok: Runtime: 0:00:00.006875
2026-04-09 00:05:27.400217 |
2026-04-09 00:05:27.400340 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-09 00:05:27.459138 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-09 00:05:27.466327 |
2026-04-09 00:05:27.466454 | TASK [Run manager part 0]
2026-04-09 00:05:28.448007 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-09 00:05:28.500443 | orchestrator |
2026-04-09 00:05:28.500491 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-09 00:05:28.500498 | orchestrator |
2026-04-09 00:05:28.500509 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-09 00:05:30.377907 | orchestrator | ok: [testbed-manager]
2026-04-09 00:05:30.377956 | orchestrator |
2026-04-09 00:05:30.377975 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-09 00:05:30.377984 | orchestrator |
2026-04-09 00:05:30.377992 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 00:05:32.243661 | orchestrator | ok: [testbed-manager]
2026-04-09 00:05:32.243719 | orchestrator |
2026-04-09 00:05:32.243737 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-09 00:05:32.910581 | orchestrator | ok: [testbed-manager]
2026-04-09 00:05:32.910649 | orchestrator |
2026-04-09 00:05:32.910662 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-09 00:05:32.950708 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:05:32.950769 | orchestrator |
2026-04-09 00:05:32.950783 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-09 00:05:32.985525 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:05:32.985586 | orchestrator |
2026-04-09 00:05:32.985598 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-09 00:05:33.019158 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:05:33.019213 | orchestrator |
2026-04-09 00:05:33.019220 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-09 00:05:33.705414 | orchestrator | changed: [testbed-manager]
2026-04-09 00:05:33.705475 | orchestrator |
2026-04-09 00:05:33.705483 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-09 00:08:17.461802 | orchestrator | changed: [testbed-manager]
2026-04-09 00:08:17.461908 | orchestrator |
2026-04-09 00:08:17.461930 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-09 00:09:34.888406 | orchestrator | changed: [testbed-manager]
2026-04-09 00:09:34.888476 | orchestrator |
2026-04-09 00:09:34.888497 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-09 00:09:54.960622 | orchestrator | changed: [testbed-manager]
2026-04-09 00:09:54.960687 | orchestrator |
2026-04-09 00:09:54.960698 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-09 00:10:03.617396 | orchestrator | changed: [testbed-manager]
2026-04-09 00:10:03.617444 | orchestrator |
2026-04-09 00:10:03.617452 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-09 00:10:03.663326 | orchestrator | ok: [testbed-manager]
2026-04-09 00:10:03.663363 | orchestrator |
2026-04-09 00:10:03.663371 | orchestrator | TASK
[Get current user] ******************************************************** 2026-04-09 00:10:05.245933 | orchestrator | ok: [testbed-manager] 2026-04-09 00:10:05.246091 | orchestrator | 2026-04-09 00:10:05.246113 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-09 00:10:05.985130 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:05.985199 | orchestrator | 2026-04-09 00:10:05.985217 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-09 00:10:11.786364 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:11.786415 | orchestrator | 2026-04-09 00:10:11.786424 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-09 00:10:17.347317 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:17.347536 | orchestrator | 2026-04-09 00:10:17.347555 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-09 00:10:19.899154 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:19.900087 | orchestrator | 2026-04-09 00:10:19.900116 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-09 00:10:21.593657 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:21.593751 | orchestrator | 2026-04-09 00:10:21.593818 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-09 00:10:22.694209 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-09 00:10:22.694348 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-09 00:10:22.694378 | orchestrator | 2026-04-09 00:10:22.694403 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-09 00:10:22.736813 | orchestrator | [DEPRECATION WARNING]: The connection's stdin 
object is deprecated. Call 2026-04-09 00:10:22.736892 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-09 00:10:22.736906 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-09 00:10:22.736918 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-09 00:10:29.463254 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-09 00:10:29.463316 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-09 00:10:29.463322 | orchestrator | 2026-04-09 00:10:29.463327 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-09 00:10:30.034343 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:30.034457 | orchestrator | 2026-04-09 00:10:30.034473 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-09 00:10:54.384226 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-09 00:10:54.384331 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-09 00:10:54.384347 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-09 00:10:54.384358 | orchestrator | 2026-04-09 00:10:54.384370 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-09 00:10:56.679340 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-09 00:10:56.679558 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-09 00:10:56.679576 | orchestrator | 2026-04-09 00:10:56.679592 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-09 00:10:56.679604 | orchestrator | 2026-04-09 00:10:56.679616 | orchestrator | TASK [Gathering Facts] ********************************************************* 
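The two install tasks above ("Install collections from Ansible galaxy" and "Install local collections") boil down to `ansible-galaxy collection install` calls. The same set could be expressed as a single requirements file; this is a sketch, not the job's actual mechanism (the playbook loops over items), and the local-source paths mirror the /opt/src directories created earlier:

```yaml
# collections/requirements.yml -- hypothetical equivalent of the two
# install tasks above; applied with:
#   ansible-galaxy collection install -r collections/requirements.yml
collections:
  - name: ansible.netcommon
  - name: ansible.posix
  - name: community.docker
    version: ">=3.10.2"
  # local collections synced into /opt/src earlier in this play
  - source: /opt/src/osism/ansible-collection-commons
    type: dir
  - source: /opt/src/osism/ansible-collection-services
    type: dir
```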
2026-04-09 00:10:58.072780 | orchestrator | ok: [testbed-manager] 2026-04-09 00:10:58.072868 | orchestrator | 2026-04-09 00:10:58.072885 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-09 00:10:58.135230 | orchestrator | ok: [testbed-manager] 2026-04-09 00:10:58.135319 | orchestrator | 2026-04-09 00:10:58.135336 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-09 00:10:58.207144 | orchestrator | ok: [testbed-manager] 2026-04-09 00:10:58.207192 | orchestrator | 2026-04-09 00:10:58.207200 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-09 00:10:59.023004 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:59.023049 | orchestrator | 2026-04-09 00:10:59.023061 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-09 00:10:59.746014 | orchestrator | changed: [testbed-manager] 2026-04-09 00:10:59.746127 | orchestrator | 2026-04-09 00:10:59.746145 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-09 00:11:01.093921 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-09 00:11:01.094012 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-09 00:11:01.094063 | orchestrator | 2026-04-09 00:11:01.094077 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-09 00:11:02.493202 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:02.493289 | orchestrator | 2026-04-09 00:11:02.493307 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-09 00:11:04.225095 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:11:04.225177 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-09 
00:11:04.225207 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:11:04.225220 | orchestrator | 2026-04-09 00:11:04.225233 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-09 00:11:04.285185 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:11:04.285276 | orchestrator | 2026-04-09 00:11:04.285294 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-09 00:11:04.362998 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:11:04.363075 | orchestrator | 2026-04-09 00:11:04.363088 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-09 00:11:04.928577 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:04.929594 | orchestrator | 2026-04-09 00:11:04.929621 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-09 00:11:05.006922 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:11:05.006969 | orchestrator | 2026-04-09 00:11:05.006979 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-09 00:11:05.890308 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-09 00:11:05.890354 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:05.890363 | orchestrator | 2026-04-09 00:11:05.890371 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-09 00:11:05.939330 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:11:05.939378 | orchestrator | 2026-04-09 00:11:05.939390 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-09 00:11:05.977195 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:11:05.977233 | orchestrator | 2026-04-09 00:11:05.977239 | orchestrator | TASK 
[osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-09 00:11:06.013626 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:11:06.013683 | orchestrator | 2026-04-09 00:11:06.013690 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-09 00:11:06.091825 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:11:06.091910 | orchestrator | 2026-04-09 00:11:06.091925 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-09 00:11:06.816874 | orchestrator | ok: [testbed-manager] 2026-04-09 00:11:06.816960 | orchestrator | 2026-04-09 00:11:06.816976 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-09 00:11:06.816989 | orchestrator | 2026-04-09 00:11:06.817003 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:11:08.184143 | orchestrator | ok: [testbed-manager] 2026-04-09 00:11:08.184229 | orchestrator | 2026-04-09 00:11:08.184255 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-09 00:11:09.128900 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:09.128950 | orchestrator | 2026-04-09 00:11:09.128956 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:11:09.128962 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-09 00:11:09.128967 | orchestrator | 2026-04-09 00:11:09.706776 | orchestrator | ok: Runtime: 0:05:41.373222 2026-04-09 00:11:09.725625 | 2026-04-09 00:11:09.725780 | TASK [Point out that logging in on the manager is now possible] 2026-04-09 00:11:09.760378 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 
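The osism.commons.operator play above provisions the operator account end to end: create group and user, add the supplementary groups adm and sudo, install a sudoers drop-in, set locale exports in .bashrc, deploy SSH authorized keys, and finally lock the password. The sudoers drop-in such a role installs typically looks like the following; this is a hypothetical sketch, the real template ships with the role:

```
# /etc/sudoers.d/operator -- illustrative content only; grants the
# operator user passwordless sudo, matching the locked-password setup
# (no password exists to type at a sudo prompt).
operator ALL=(ALL) NOPASSWD: ALL
```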
2026-04-09 00:11:09.770084 | 2026-04-09 00:11:09.770235 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-09 00:11:09.817432 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-09 00:11:09.826875 | 2026-04-09 00:11:09.826994 | TASK [Run manager part 1 + 2] 2026-04-09 00:11:11.573970 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-09 00:11:11.632445 | orchestrator | 2026-04-09 00:11:11.632510 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-09 00:11:11.632519 | orchestrator | 2026-04-09 00:11:11.632537 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:11:14.162252 | orchestrator | ok: [testbed-manager] 2026-04-09 00:11:14.162310 | orchestrator | 2026-04-09 00:11:14.162333 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-09 00:11:14.202169 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:11:14.202226 | orchestrator | 2026-04-09 00:11:14.202238 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-09 00:11:14.252104 | orchestrator | ok: [testbed-manager] 2026-04-09 00:11:14.252155 | orchestrator | 2026-04-09 00:11:14.252167 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-09 00:11:14.299327 | orchestrator | ok: [testbed-manager] 2026-04-09 00:11:14.299387 | orchestrator | 2026-04-09 00:11:14.299398 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-09 00:11:14.376496 | orchestrator | ok: [testbed-manager] 2026-04-09 00:11:14.376550 | orchestrator | 2026-04-09 00:11:14.376561 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-09 00:11:14.437011 | orchestrator | ok: [testbed-manager] 2026-04-09 00:11:14.437072 | orchestrator | 2026-04-09 00:11:14.437084 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-09 00:11:14.487355 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-09 00:11:14.487406 | orchestrator | 2026-04-09 00:11:14.487411 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-09 00:11:15.235234 | orchestrator | ok: [testbed-manager] 2026-04-09 00:11:15.236380 | orchestrator | 2026-04-09 00:11:15.236403 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-09 00:11:15.286813 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:11:15.286868 | orchestrator | 2026-04-09 00:11:15.286876 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-09 00:11:16.672854 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:16.672912 | orchestrator | 2026-04-09 00:11:16.672919 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-09 00:11:17.245215 | orchestrator | ok: [testbed-manager] 2026-04-09 00:11:17.245278 | orchestrator | 2026-04-09 00:11:17.245287 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-09 00:11:18.437948 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:18.438066 | orchestrator | 2026-04-09 00:11:18.438086 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-09 00:11:35.162897 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:35.162972 | orchestrator | 
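The repository role above removes /etc/apt/sources.list and copies an ubuntu.sources file instead: Ubuntu 24.04 manages APT sources in the deb822 format under /etc/apt/sources.list.d/. A hypothetical sketch of such a file follows; the mirror URIs and suites are assumptions, the role templates the real values:

```
# /etc/apt/sources.list.d/ubuntu.sources -- deb822 format (illustrative).
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

Types: deb
URIs: http://security.ubuntu.com/ubuntu
Suites: noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```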
2026-04-09 00:11:35.162983 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-09 00:11:35.837001 | orchestrator | ok: [testbed-manager] 2026-04-09 00:11:35.837085 | orchestrator | 2026-04-09 00:11:35.837102 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-09 00:11:35.895784 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:11:35.895869 | orchestrator | 2026-04-09 00:11:35.895885 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-09 00:11:36.857953 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:36.858085 | orchestrator | 2026-04-09 00:11:36.858103 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-09 00:11:37.805988 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:37.806059 | orchestrator | 2026-04-09 00:11:37.806068 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-09 00:11:38.361159 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:38.361223 | orchestrator | 2026-04-09 00:11:38.361238 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-09 00:11:38.399211 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-09 00:11:38.399284 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-09 00:11:38.399294 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-09 00:11:38.399301 | orchestrator | deprecation_warnings=False in ansible.cfg. 
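The recurring "Wait up to 300 seconds for port 22 to become open and contain OpenSSH" tasks in this job are Ansible's wait_for module with a search_regex: poll the port until the SSH banner (e.g. "SSH-2.0-OpenSSH_9.6p1") appears. A minimal shell sketch of the same check; the host, port, and timeout values are illustrative, not taken from the job:

```shell
# Poll a TCP port until the server's banner mentions OpenSSH, or give up
# after $timeout seconds. Uses bash's /dev/tcp pseudo-device.
wait_for_ssh() {
    local host=$1 port=$2 timeout=$3 banner
    local deadline=$((SECONDS + timeout))
    while ((SECONDS < deadline)); do
        # Read the first line the server sends after connect.
        banner=$(timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port && head -n1 <&3" 2>/dev/null)
        [[ $banner == *OpenSSH* ]] && return 0
        sleep 5
    done
    return 1   # port never opened or banner never matched
}
```

Checking for the banner, not just the open port, matters after a reboot: the kernel may accept connections before sshd is actually ready to authenticate.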
2026-04-09 00:11:41.261649 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:41.261722 | orchestrator | 2026-04-09 00:11:41.261736 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-09 00:11:49.874880 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-09 00:11:49.874973 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-09 00:11:49.874991 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-09 00:11:49.875007 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-09 00:11:49.875026 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-09 00:11:49.875037 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-09 00:11:49.875048 | orchestrator | 2026-04-09 00:11:49.875060 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-09 00:11:50.923166 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:50.923209 | orchestrator | 2026-04-09 00:11:50.923217 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-09 00:11:54.016899 | orchestrator | changed: [testbed-manager] 2026-04-09 00:11:54.016938 | orchestrator | 2026-04-09 00:11:54.016943 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-09 00:11:54.061445 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:11:54.061490 | orchestrator | 2026-04-09 00:11:54.061499 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-09 00:13:32.141791 | orchestrator | changed: [testbed-manager] 2026-04-09 00:13:32.141829 | orchestrator | 2026-04-09 00:13:32.141836 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-09 00:13:33.299658 | orchestrator | ok: [testbed-manager] 2026-04-09 00:13:33.299748 | 
orchestrator | 2026-04-09 00:13:33.299768 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:13:33.299784 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-09 00:13:33.299798 | orchestrator | 2026-04-09 00:13:33.461446 | orchestrator | ok: Runtime: 0:02:23.255836 2026-04-09 00:13:33.473237 | 2026-04-09 00:13:33.473371 | TASK [Reboot manager] 2026-04-09 00:13:35.010253 | orchestrator | ok: Runtime: 0:00:00.940981 2026-04-09 00:13:35.024234 | 2026-04-09 00:13:35.024371 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-09 00:13:48.974469 | orchestrator | ok 2026-04-09 00:13:48.986703 | 2026-04-09 00:13:48.986962 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-09 00:14:49.037471 | orchestrator | ok 2026-04-09 00:14:49.046299 | 2026-04-09 00:14:49.046444 | TASK [Deploy manager + bootstrap nodes] 2026-04-09 00:14:51.531258 | orchestrator | 2026-04-09 00:14:51.531446 | orchestrator | # DEPLOY MANAGER 2026-04-09 00:14:51.531471 | orchestrator | 2026-04-09 00:14:51.531485 | orchestrator | + set -e 2026-04-09 00:14:51.531499 | orchestrator | + echo 2026-04-09 00:14:51.531513 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-09 00:14:51.531531 | orchestrator | + echo 2026-04-09 00:14:51.531578 | orchestrator | + cat /opt/manager-vars.sh 2026-04-09 00:14:51.534757 | orchestrator | export NUMBER_OF_NODES=6 2026-04-09 00:14:51.534788 | orchestrator | 2026-04-09 00:14:51.534803 | orchestrator | export CEPH_VERSION= 2026-04-09 00:14:51.534816 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-09 00:14:51.534829 | orchestrator | export MANAGER_VERSION=10.0.0 2026-04-09 00:14:51.534841 | orchestrator | export OPENSTACK_VERSION= 2026-04-09 00:14:51.534852 | orchestrator | 2026-04-09 00:14:51.534863 | orchestrator | export ARA=false 2026-04-09 00:14:51.534879 | orchestrator | export 
DEPLOY_MODE=manager 2026-04-09 00:14:51.534891 | orchestrator | export TEMPEST=true 2026-04-09 00:14:51.534903 | orchestrator | export IS_ZUUL=true 2026-04-09 00:14:51.534921 | orchestrator | 2026-04-09 00:14:51.534939 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.59 2026-04-09 00:14:51.534950 | orchestrator | export EXTERNAL_API=false 2026-04-09 00:14:51.534961 | orchestrator | 2026-04-09 00:14:51.534979 | orchestrator | export IMAGE_USER=ubuntu 2026-04-09 00:14:51.534990 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-09 00:14:51.535004 | orchestrator | 2026-04-09 00:14:51.535026 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-09 00:14:51.535045 | orchestrator | 2026-04-09 00:14:51.535056 | orchestrator | + echo 2026-04-09 00:14:51.535068 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-09 00:14:51.535779 | orchestrator | ++ export INTERACTIVE=false 2026-04-09 00:14:51.535799 | orchestrator | ++ INTERACTIVE=false 2026-04-09 00:14:51.535813 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-09 00:14:51.535826 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-09 00:14:51.535865 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 00:14:51.535878 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 00:14:51.535889 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 00:14:51.535900 | orchestrator | ++ export CEPH_VERSION= 2026-04-09 00:14:51.535911 | orchestrator | ++ CEPH_VERSION= 2026-04-09 00:14:51.535991 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 00:14:51.536007 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 00:14:51.536019 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-09 00:14:51.536030 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-09 00:14:51.536040 | orchestrator | ++ export OPENSTACK_VERSION= 2026-04-09 00:14:51.536051 | orchestrator | ++ OPENSTACK_VERSION= 2026-04-09 00:14:51.536062 | orchestrator | ++ export ARA=false 2026-04-09 00:14:51.536073 | 
orchestrator | ++ ARA=false 2026-04-09 00:14:51.536084 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-09 00:14:51.536104 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-09 00:14:51.536115 | orchestrator | ++ export TEMPEST=true 2026-04-09 00:14:51.536126 | orchestrator | ++ TEMPEST=true 2026-04-09 00:14:51.536141 | orchestrator | ++ export IS_ZUUL=true 2026-04-09 00:14:51.536153 | orchestrator | ++ IS_ZUUL=true 2026-04-09 00:14:51.536164 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.59 2026-04-09 00:14:51.536176 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.59 2026-04-09 00:14:51.536187 | orchestrator | ++ export EXTERNAL_API=false 2026-04-09 00:14:51.536197 | orchestrator | ++ EXTERNAL_API=false 2026-04-09 00:14:51.536242 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-09 00:14:51.536253 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-09 00:14:51.536265 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-09 00:14:51.536276 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-09 00:14:51.536287 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-09 00:14:51.536298 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-09 00:14:51.536310 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-09 00:14:51.592683 | orchestrator | + docker version 2026-04-09 00:14:51.689909 | orchestrator | Client: Docker Engine - Community 2026-04-09 00:14:51.690004 | orchestrator | Version: 27.5.1 2026-04-09 00:14:51.690073 | orchestrator | API version: 1.47 2026-04-09 00:14:51.690087 | orchestrator | Go version: go1.22.11 2026-04-09 00:14:51.690098 | orchestrator | Git commit: 9f9e405 2026-04-09 00:14:51.690109 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-09 00:14:51.690121 | orchestrator | OS/Arch: linux/amd64 2026-04-09 00:14:51.690133 | orchestrator | Context: default 2026-04-09 00:14:51.690144 | orchestrator | 2026-04-09 00:14:51.690156 | orchestrator | Server: Docker Engine - 
Community 2026-04-09 00:14:51.690167 | orchestrator | Engine: 2026-04-09 00:14:51.690178 | orchestrator | Version: 27.5.1 2026-04-09 00:14:51.690189 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-09 00:14:51.690269 | orchestrator | Go version: go1.22.11 2026-04-09 00:14:51.690288 | orchestrator | Git commit: 4c9b3b0 2026-04-09 00:14:51.690306 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-09 00:14:51.690327 | orchestrator | OS/Arch: linux/amd64 2026-04-09 00:14:51.690340 | orchestrator | Experimental: false 2026-04-09 00:14:51.690351 | orchestrator | containerd: 2026-04-09 00:14:51.690361 | orchestrator | Version: v2.2.2 2026-04-09 00:14:51.690372 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-09 00:14:51.690383 | orchestrator | runc: 2026-04-09 00:14:51.690394 | orchestrator | Version: 1.3.4 2026-04-09 00:14:51.690405 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-09 00:14:51.690416 | orchestrator | docker-init: 2026-04-09 00:14:51.690426 | orchestrator | Version: 0.19.0 2026-04-09 00:14:51.690438 | orchestrator | GitCommit: de40ad0 2026-04-09 00:14:51.693468 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-09 00:14:51.702959 | orchestrator | + set -e 2026-04-09 00:14:51.703520 | orchestrator | + source /opt/manager-vars.sh 2026-04-09 00:14:51.703543 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-09 00:14:51.703555 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-09 00:14:51.703567 | orchestrator | ++ export CEPH_VERSION= 2026-04-09 00:14:51.703578 | orchestrator | ++ CEPH_VERSION= 2026-04-09 00:14:51.703589 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-09 00:14:51.703601 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-09 00:14:51.703612 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-09 00:14:51.703625 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-09 00:14:51.703637 | orchestrator | ++ export OPENSTACK_VERSION= 2026-04-09 
00:14:51.703648 | orchestrator | ++ OPENSTACK_VERSION=
2026-04-09 00:14:51.703659 | orchestrator | ++ export ARA=false
2026-04-09 00:14:51.703670 | orchestrator | ++ ARA=false
2026-04-09 00:14:51.703681 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-09 00:14:51.703691 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-09 00:14:51.703702 | orchestrator | ++ export TEMPEST=true
2026-04-09 00:14:51.703713 | orchestrator | ++ TEMPEST=true
2026-04-09 00:14:51.703723 | orchestrator | ++ export IS_ZUUL=true
2026-04-09 00:14:51.703734 | orchestrator | ++ IS_ZUUL=true
2026-04-09 00:14:51.703745 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.59
2026-04-09 00:14:51.703756 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.59
2026-04-09 00:14:51.703767 | orchestrator | ++ export EXTERNAL_API=false
2026-04-09 00:14:51.703778 | orchestrator | ++ EXTERNAL_API=false
2026-04-09 00:14:51.703788 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-09 00:14:51.703799 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-09 00:14:51.703810 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-09 00:14:51.703820 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-09 00:14:51.703831 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-09 00:14:51.703842 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-09 00:14:51.703853 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-09 00:14:51.703863 | orchestrator | ++ export INTERACTIVE=false
2026-04-09 00:14:51.703874 | orchestrator | ++ INTERACTIVE=false
2026-04-09 00:14:51.703884 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-09 00:14:51.703899 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-09 00:14:51.703910 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-09 00:14:51.703922 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0
2026-04-09 00:14:51.710497 | orchestrator | + set -e
2026-04-09 00:14:51.710956 | orchestrator | + VERSION=10.0.0
2026-04-09 00:14:51.710973 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-09 00:14:51.717936 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-09 00:14:51.717998 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-09 00:14:51.722391 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-09 00:14:51.725915 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-09 00:14:51.732636 | orchestrator | /opt/configuration ~
2026-04-09 00:14:51.732683 | orchestrator | + set -e
2026-04-09 00:14:51.732694 | orchestrator | + pushd /opt/configuration
2026-04-09 00:14:51.732704 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-09 00:14:51.733993 | orchestrator | + source /opt/venv/bin/activate
2026-04-09 00:14:51.734982 | orchestrator | ++ deactivate nondestructive
2026-04-09 00:14:51.735008 | orchestrator | ++ '[' -n '' ']'
2026-04-09 00:14:51.735018 | orchestrator | ++ '[' -n '' ']'
2026-04-09 00:14:51.735027 | orchestrator | ++ hash -r
2026-04-09 00:14:51.735060 | orchestrator | ++ '[' -n '' ']'
2026-04-09 00:14:51.735069 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-09 00:14:51.735078 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-09 00:14:51.735087 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-09 00:14:51.735102 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-09 00:14:51.735111 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-09 00:14:51.735120 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-09 00:14:51.735129 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-09 00:14:51.735139 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 00:14:51.735149 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 00:14:51.735158 | orchestrator | ++ export PATH
2026-04-09 00:14:51.735167 | orchestrator | ++ '[' -n '' ']'
2026-04-09 00:14:51.735175 | orchestrator | ++ '[' -z '' ']'
2026-04-09 00:14:51.735184 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-09 00:14:51.735193 | orchestrator | ++ PS1='(venv) '
2026-04-09 00:14:51.735222 | orchestrator | ++ export PS1
2026-04-09 00:14:51.735232 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-09 00:14:51.735241 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-09 00:14:51.735249 | orchestrator | ++ hash -r
2026-04-09 00:14:51.735258 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-09 00:14:52.734296 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-09 00:14:52.735007 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-09 00:14:52.736310 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-09 00:14:52.737489 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-09 00:14:52.738625 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-09 00:14:52.748548 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-09 00:14:52.749999 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-09 00:14:52.750833 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-09 00:14:52.752280 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-09 00:14:52.781767 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-09 00:14:52.783257 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-09 00:14:52.784686 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-09 00:14:52.785948 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-09 00:14:52.789775 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-09 00:14:52.990080 | orchestrator | ++ which gilt
2026-04-09 00:14:52.994008 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-09 00:14:52.994111 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-09 00:14:53.225873 | orchestrator | osism.cfg-generics:
2026-04-09 00:14:53.375108 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-09 00:14:53.375245 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-09 00:14:53.375567 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-09 00:14:53.375708 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-09 00:14:54.037986 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-09 00:14:54.048462 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-09 00:14:54.504788 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-09 00:14:54.551886 | orchestrator | ~
2026-04-09 00:14:54.551982 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-09 00:14:54.551998 | orchestrator | + deactivate
2026-04-09 00:14:54.552013 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-09 00:14:54.552026 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 00:14:54.552038 | orchestrator | + export PATH
2026-04-09 00:14:54.552050 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-09 00:14:54.552062 | orchestrator | + '[' -n '' ']'
2026-04-09 00:14:54.552073 | orchestrator | + hash -r
2026-04-09 00:14:54.552084 | orchestrator | + '[' -n '' ']'
2026-04-09 00:14:54.552095 | orchestrator | + unset VIRTUAL_ENV
2026-04-09 00:14:54.552106 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-09 00:14:54.552117 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-09 00:14:54.552128 | orchestrator | + unset -f deactivate
2026-04-09 00:14:54.552140 | orchestrator | + popd
2026-04-09 00:14:54.553784 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-09 00:14:54.553811 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-04-09 00:14:54.554690 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-09 00:14:54.608228 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-09 00:14:54.608305 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-04-09 00:14:54.608797 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-09 00:14:54.683701 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-09 00:14:54.683795 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-09 00:14:54.690795 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-09 00:14:54.695311 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-04-09 00:14:54.787047 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-09 00:14:54.787142 | orchestrator | + source /opt/venv/bin/activate
2026-04-09 00:14:54.787157 | orchestrator | ++ deactivate nondestructive
2026-04-09 00:14:54.787169 | orchestrator | ++ '[' -n '' ']'
2026-04-09 00:14:54.787180 | orchestrator | ++ '[' -n '' ']'
2026-04-09 00:14:54.787192 | orchestrator | ++ hash -r
2026-04-09 00:14:54.787236 | orchestrator | ++ '[' -n '' ']'
2026-04-09 00:14:54.787251 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-09 00:14:54.787262 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-09 00:14:54.787273 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-09 00:14:54.787285 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-09 00:14:54.787297 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-09 00:14:54.787308 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-09 00:14:54.787319 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-09 00:14:54.787331 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 00:14:54.787343 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 00:14:54.787354 | orchestrator | ++ export PATH
2026-04-09 00:14:54.787369 | orchestrator | ++ '[' -n '' ']'
2026-04-09 00:14:54.787469 | orchestrator | ++ '[' -z '' ']'
2026-04-09 00:14:54.787486 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-09 00:14:54.787498 | orchestrator | ++ PS1='(venv) '
2026-04-09 00:14:54.787513 | orchestrator | ++ export PS1
2026-04-09 00:14:54.787524 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-09 00:14:54.787535 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-09 00:14:54.787546 | orchestrator | ++ hash -r
2026-04-09 00:14:54.787593 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-04-09 00:14:55.874998 | orchestrator |
2026-04-09 00:14:55.875090 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-09 00:14:55.875103 | orchestrator |
2026-04-09 00:14:55.875112 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-09 00:14:56.447649 | orchestrator | ok: [testbed-manager]
2026-04-09 00:14:56.447753 | orchestrator |
2026-04-09 00:14:56.447772 | orchestrator | TASK [Copy fact files] *********************************************************
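The `set-manager-version.sh` step earlier in this trace pins `manager_version` with `sed` and deletes any explicit `ceph_version:`/`openstack_version:` pins so the release defaults apply. A minimal sketch of that pattern, reconstructed from the xtrace (the temp file stands in for `/opt/configuration/environments/manager/configuration.yml`; the sample keys are illustrative):

```shell
#!/bin/sh
set -e
VERSION=10.0.0

# Stand-in configuration file with example values (not the real testbed file).
CONF=$(mktemp)
printf 'manager_version: latest\nceph_version: x\nopenstack_version: y\n' > "$CONF"

if [ "$VERSION" != "latest" ]; then
    # Pin the manager to the requested release ...
    sed -i "s/manager_version: .*/manager_version: $VERSION/g" "$CONF"
    # ... and drop explicit ceph/openstack pins so release defaults are used.
    sed -i '/ceph_version:/d' "$CONF"
    sed -i '/openstack_version:/d' "$CONF"
fi
cat "$CONF"
```

Note that `sed -i` with no backup suffix is the GNU sed form used on the Debian/Ubuntu nodes in this job.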
2026-04-09 00:14:57.387636 | orchestrator | changed: [testbed-manager]
2026-04-09 00:14:57.387737 | orchestrator |
2026-04-09 00:14:57.387754 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-09 00:14:57.387767 | orchestrator |
2026-04-09 00:14:57.387778 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 00:14:59.518069 | orchestrator | ok: [testbed-manager]
2026-04-09 00:14:59.518260 | orchestrator |
2026-04-09 00:14:59.519516 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-09 00:14:59.562683 | orchestrator | ok: [testbed-manager]
2026-04-09 00:14:59.562780 | orchestrator |
2026-04-09 00:14:59.562797 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-09 00:14:59.986634 | orchestrator | changed: [testbed-manager]
2026-04-09 00:14:59.986732 | orchestrator |
2026-04-09 00:14:59.986750 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-09 00:15:00.013156 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:15:00.013275 | orchestrator |
2026-04-09 00:15:00.013291 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-09 00:15:00.345189 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:00.345311 | orchestrator |
2026-04-09 00:15:00.345326 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-09 00:15:00.655998 | orchestrator | ok: [testbed-manager]
2026-04-09 00:15:00.656099 | orchestrator |
2026-04-09 00:15:00.656117 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-09 00:15:00.764610 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:15:00.764721 | orchestrator |
2026-04-09 00:15:00.764747 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-09 00:15:00.764768 | orchestrator |
2026-04-09 00:15:00.764786 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 00:15:02.527768 | orchestrator | ok: [testbed-manager]
2026-04-09 00:15:02.527869 | orchestrator |
2026-04-09 00:15:02.527906 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-09 00:15:02.652134 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-09 00:15:02.652247 | orchestrator |
2026-04-09 00:15:02.652266 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-09 00:15:02.708765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-09 00:15:02.708850 | orchestrator |
2026-04-09 00:15:02.708867 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-09 00:15:03.818624 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-09 00:15:03.818767 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-09 00:15:03.818786 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-09 00:15:03.818798 | orchestrator |
2026-04-09 00:15:03.818811 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-09 00:15:05.560710 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-09 00:15:05.560888 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-09 00:15:05.560904 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-09 00:15:05.560916 | orchestrator |
2026-04-09 00:15:05.560929 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-09 00:15:06.201728 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 00:15:06.201833 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:06.201852 | orchestrator |
2026-04-09 00:15:06.201865 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-09 00:15:06.848338 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 00:15:06.848420 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:06.848436 | orchestrator |
2026-04-09 00:15:06.848449 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-09 00:15:06.904879 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:15:06.904997 | orchestrator |
2026-04-09 00:15:06.905015 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-09 00:15:07.263124 | orchestrator | ok: [testbed-manager]
2026-04-09 00:15:07.263300 | orchestrator |
2026-04-09 00:15:07.263331 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-09 00:15:07.325461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-09 00:15:07.325563 | orchestrator |
2026-04-09 00:15:07.325578 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-09 00:15:08.372613 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:08.372696 | orchestrator |
2026-04-09 00:15:08.372712 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-09 00:15:09.136533 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:09.136637 | orchestrator |
2026-04-09 00:15:09.136654 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-09 00:15:42.076280 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:42.076378 | orchestrator |
2026-04-09 00:15:42.076396 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-09 00:15:42.142214 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:15:42.142298 | orchestrator |
2026-04-09 00:15:42.142314 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-09 00:15:42.142326 | orchestrator |
2026-04-09 00:15:42.142338 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 00:15:43.946446 | orchestrator | ok: [testbed-manager]
2026-04-09 00:15:43.946555 | orchestrator |
2026-04-09 00:15:43.946572 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-09 00:15:44.056434 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-09 00:15:44.056526 | orchestrator |
2026-04-09 00:15:44.056543 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-09 00:15:44.112446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 00:15:44.112533 | orchestrator |
2026-04-09 00:15:44.112548 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-09 00:15:46.461734 | orchestrator | ok: [testbed-manager]
2026-04-09 00:15:46.461850 | orchestrator |
2026-04-09 00:15:46.461877 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-09 00:15:46.515722 | orchestrator | ok: [testbed-manager]
2026-04-09 00:15:46.515812 | orchestrator |
2026-04-09 00:15:46.515827 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-09 00:15:46.639230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-09 00:15:46.639361 | orchestrator |
2026-04-09 00:15:46.639401 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-09 00:15:49.416561 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-09 00:15:49.416696 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-09 00:15:49.416716 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-09 00:15:49.416728 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-09 00:15:49.416740 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-09 00:15:49.416751 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-09 00:15:49.416762 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-09 00:15:49.416774 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-09 00:15:49.416786 | orchestrator |
2026-04-09 00:15:49.416799 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-09 00:15:50.044831 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:50.044943 | orchestrator |
2026-04-09 00:15:50.044963 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-09 00:15:50.657401 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:50.657482 | orchestrator |
2026-04-09 00:15:50.657495 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-09 00:15:50.736226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-09 00:15:50.736350 | orchestrator |
2026-04-09 00:15:50.736366 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-09 00:15:51.924242 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-09 00:15:51.924352 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-09 00:15:51.924381 | orchestrator |
2026-04-09 00:15:51.924401 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-09 00:15:52.535693 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:52.535793 | orchestrator |
2026-04-09 00:15:52.535812 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-09 00:15:52.592358 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:15:52.592443 | orchestrator |
2026-04-09 00:15:52.592458 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-09 00:15:52.670186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-09 00:15:52.670274 | orchestrator |
2026-04-09 00:15:52.670288 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-09 00:15:53.295174 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:53.295275 | orchestrator |
2026-04-09 00:15:53.295291 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-09 00:15:53.355928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-09 00:15:53.356016 | orchestrator |
2026-04-09 00:15:53.356031 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-09 00:15:54.683269 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 00:15:54.683369 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-09 00:15:54.683384 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:54.683396 | orchestrator |
2026-04-09 00:15:54.683407 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-09 00:15:55.294327 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:55.294541 | orchestrator |
2026-04-09 00:15:55.294561 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-09 00:15:55.351910 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:15:55.352001 | orchestrator |
2026-04-09 00:15:55.352016 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-09 00:15:55.453555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-09 00:15:55.453648 | orchestrator |
2026-04-09 00:15:55.453665 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-09 00:15:55.968385 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:55.968479 | orchestrator |
2026-04-09 00:15:55.968495 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-09 00:15:56.353249 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:56.353347 | orchestrator |
2026-04-09 00:15:56.353366 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-09 00:15:57.596284 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-09 00:15:57.596404 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-09 00:15:57.596421 | orchestrator |
2026-04-09 00:15:57.596434 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-09 00:15:58.265316 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:58.265409 | orchestrator |
2026-04-09 00:15:58.265426 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-09 00:15:58.644572 | orchestrator | ok: [testbed-manager]
2026-04-09 00:15:58.644665 | orchestrator |
2026-04-09 00:15:58.644681 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-09 00:15:59.002379 | orchestrator | changed: [testbed-manager]
2026-04-09 00:15:59.002458 | orchestrator |
2026-04-09 00:15:59.002471 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-09 00:15:59.040891 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:15:59.040989 | orchestrator |
2026-04-09 00:15:59.041007 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-09 00:15:59.109179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-09 00:15:59.109266 | orchestrator |
2026-04-09 00:15:59.109280 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-09 00:15:59.159982 | orchestrator | ok: [testbed-manager]
2026-04-09 00:15:59.160073 | orchestrator |
2026-04-09 00:15:59.160086 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-09 00:16:01.165794 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-09 00:16:01.165894 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-09 00:16:01.165910 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-09 00:16:01.165922 | orchestrator |
2026-04-09 00:16:01.165934 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-09 00:16:01.857100 | orchestrator | changed: [testbed-manager]
2026-04-09 00:16:01.857195 | orchestrator |
2026-04-09 00:16:01.857205 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-09 00:16:02.547542 | orchestrator | changed: [testbed-manager]
2026-04-09 00:16:02.547645 | orchestrator |
2026-04-09 00:16:02.547663 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-09 00:16:03.252186 | orchestrator | changed: [testbed-manager]
2026-04-09 00:16:03.252283 | orchestrator |
2026-04-09 00:16:03.252300 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-09 00:16:03.327554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-09 00:16:03.327648 | orchestrator |
2026-04-09 00:16:03.327665 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-09 00:16:03.379022 | orchestrator | ok: [testbed-manager]
2026-04-09 00:16:03.379107 | orchestrator |
2026-04-09 00:16:03.379121 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-09 00:16:04.080050 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-09 00:16:04.080211 | orchestrator |
2026-04-09 00:16:04.080241 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-09 00:16:04.162424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-09 00:16:04.162525 | orchestrator |
2026-04-09 00:16:04.162540 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-09 00:16:04.890374 | orchestrator | changed: [testbed-manager]
2026-04-09 00:16:04.890454 | orchestrator |
2026-04-09 00:16:04.890462 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-09 00:16:05.507970 | orchestrator | ok: [testbed-manager]
2026-04-09 00:16:05.508065 | orchestrator |
2026-04-09 00:16:05.508083 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-09 00:16:05.563780 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:16:05.563868 | orchestrator |
2026-04-09 00:16:05.563884 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-09 00:16:05.623922 | orchestrator | ok: [testbed-manager]
2026-04-09 00:16:05.624008 | orchestrator |
2026-04-09 00:16:05.624022 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-09 00:16:06.431114 | orchestrator | changed: [testbed-manager]
2026-04-09 00:16:06.431259 | orchestrator |
2026-04-09 00:16:06.431278 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-09 00:17:19.541380 | orchestrator | changed: [testbed-manager]
2026-04-09 00:17:19.541498 | orchestrator |
2026-04-09 00:17:19.541516 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-09 00:17:20.516924 | orchestrator | ok: [testbed-manager]
2026-04-09 00:17:20.516997 | orchestrator |
2026-04-09 00:17:20.517003 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-09 00:17:20.577105 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:17:20.577226 | orchestrator |
2026-04-09 00:17:20.577239 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-09 00:17:23.068980 | orchestrator | changed: [testbed-manager]
2026-04-09 00:17:23.069080 | orchestrator |
2026-04-09 00:17:23.069099 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-09 00:17:23.131531 | orchestrator | ok: [testbed-manager]
2026-04-09 00:17:23.131614 | orchestrator |
2026-04-09 00:17:23.131627 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-09 00:17:23.131636 | orchestrator |
2026-04-09 00:17:23.131645 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-09 00:17:23.278638 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:17:23.278734 | orchestrator |
2026-04-09 00:17:23.278750 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-09 00:18:23.326093 | orchestrator | Pausing for 60 seconds
2026-04-09 00:18:23.326235 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:23.326253 | orchestrator |
2026-04-09 00:18:23.326266 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-09 00:18:27.352649 | orchestrator | changed: [testbed-manager]
2026-04-09 00:18:27.352735 | orchestrator |
2026-04-09 00:18:27.352751 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-09 00:19:29.230643 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-09 00:19:29.230745 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-09 00:19:29.230760 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
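The "Wait for an healthy manager service" handler above retries a health probe with a retry budget (50 attempts in this log) until the container reports healthy. A minimal sketch of the same wait-until-healthy loop; here a stub probe stands in for something like `[ "$(docker inspect --format '{{.State.Health.Status}}' manager)" = healthy ]`, and the container name `manager` is illustrative:

```shell
#!/bin/sh
# Stub probe: simulates a service that only becomes healthy on the third check.
# In real use this would query the container's Docker healthcheck status.
ATTEMPT=0
probe() {
    ATTEMPT=$((ATTEMPT + 1))
    [ "$ATTEMPT" -ge 3 ]
}

retries=50
until probe; do
    retries=$((retries - 1))
    if [ "$retries" -le 0 ]; then
        echo "service never became healthy" >&2
        exit 1
    fi
    # The real handler pauses between retries; omitted here to keep the test fast.
done
echo "healthy after $ATTEMPT probes"
```

Note `probe` is called directly rather than in a command substitution, so the attempt counter survives between iterations (a `$(probe)` subshell would discard it).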
2026-04-09 00:19:29.230772 | orchestrator | changed: [testbed-manager]
2026-04-09 00:19:29.230786 | orchestrator |
2026-04-09 00:19:29.230798 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-09 00:19:34.612681 | orchestrator | changed: [testbed-manager]
2026-04-09 00:19:34.612789 | orchestrator |
2026-04-09 00:19:34.612808 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-09 00:19:34.686599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-09 00:19:34.686699 | orchestrator |
2026-04-09 00:19:34.686712 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-09 00:19:34.686722 | orchestrator |
2026-04-09 00:19:34.686731 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-09 00:19:34.725490 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:19:34.725582 | orchestrator |
2026-04-09 00:19:34.725602 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-09 00:19:34.792964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-09 00:19:34.793065 | orchestrator |
2026-04-09 00:19:34.793124 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-09 00:19:35.499176 | orchestrator | changed: [testbed-manager]
2026-04-09 00:19:35.499276 | orchestrator |
2026-04-09 00:19:35.499294 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-09 00:19:38.385780 | orchestrator | ok: [testbed-manager]
2026-04-09 00:19:38.385884 | orchestrator |
2026-04-09 00:19:38.385901 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-09 00:19:38.455885 | orchestrator | ok: [testbed-manager] => {
2026-04-09 00:19:38.455976 | orchestrator | "version_check_result.stdout_lines": [
2026-04-09 00:19:38.455992 | orchestrator | "=== OSISM Container Version Check ===",
2026-04-09 00:19:38.456005 | orchestrator | "Checking running containers against expected versions...",
2026-04-09 00:19:38.456017 | orchestrator | "",
2026-04-09 00:19:38.456029 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-09 00:19:38.456040 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-09 00:19:38.456052 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456064 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-09 00:19:38.456075 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.456128 | orchestrator | "",
2026-04-09 00:19:38.456172 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-09 00:19:38.456185 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-09 00:19:38.456196 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456207 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-09 00:19:38.456218 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.456229 | orchestrator | "",
2026-04-09 00:19:38.456240 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-09 00:19:38.456251 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-09 00:19:38.456262 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456273 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-09 00:19:38.456283 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.456294 | orchestrator | "",
2026-04-09 00:19:38.456305 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-09 00:19:38.456316 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-09 00:19:38.456327 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456338 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-09 00:19:38.456348 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.456359 | orchestrator | "",
2026-04-09 00:19:38.456370 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-09 00:19:38.456381 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-09 00:19:38.456391 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456402 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-09 00:19:38.456413 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.456424 | orchestrator | "",
2026-04-09 00:19:38.456437 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-04-09 00:19:38.456450 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-09 00:19:38.456463 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456476 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-09 00:19:38.456489 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.456501 | orchestrator | "",
2026-04-09 00:19:38.456514 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-04-09 00:19:38.456527 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-09 00:19:38.456539 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456553 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-09 00:19:38.456566 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.456579 | orchestrator | "",
2026-04-09 00:19:38.456592 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-04-09 00:19:38.456605 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-09 00:19:38.456618 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456630 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-09 00:19:38.456642 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.456655 | orchestrator | "",
2026-04-09 00:19:38.456667 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-04-09 00:19:38.456680 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0",
2026-04-09 00:19:38.456692 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456705 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20260320.0",
2026-04-09 00:19:38.456717 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.456730 | orchestrator | "",
2026-04-09 00:19:38.456743 | orchestrator | "Checking service: redis (Redis Cache)",
2026-04-09 00:19:38.456756 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-09 00:19:38.456769 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456781 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-09 00:19:38.456792 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.456811 | orchestrator | "",
2026-04-09 00:19:38.456822 | orchestrator | "Checking service: api (OSISM API Service)",
2026-04-09 00:19:38.456833 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-09 00:19:38.456844 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456854 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-09 00:19:38.456871 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.456882 | orchestrator | "",
2026-04-09 00:19:38.456893 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-04-09 00:19:38.456904 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-09 00:19:38.456914 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456925 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-09 00:19:38.456936 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.456948 | orchestrator | "",
2026-04-09 00:19:38.456959 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-04-09 00:19:38.456970 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-09 00:19:38.456980 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.456991 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-09 00:19:38.457002 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.457012 | orchestrator | "",
2026-04-09 00:19:38.457023 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-04-09 00:19:38.457034 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-09 00:19:38.457045 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.457056 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-09 00:19:38.457101 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.457114 | orchestrator | "",
2026-04-09 00:19:38.457124 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-04-09 00:19:38.457135 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-09 00:19:38.457146 | orchestrator | " Enabled: true",
2026-04-09 00:19:38.457157 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-09 00:19:38.457168 | orchestrator | " Status: ✅ MATCH",
2026-04-09 00:19:38.457179 | orchestrator | "",
2026-04-09 00:19:38.457190 | orchestrator | "=== Summary ===",
2026-04-09 00:19:38.457200 | orchestrator | "Errors (version mismatches): 0",
2026-04-09 00:19:38.457212 | orchestrator | "Warnings (expected containers not running): 0",
2026-04-09 00:19:38.457222 | orchestrator | "",
2026-04-09 00:19:38.457233 | orchestrator | "✅ All running containers match expected versions!"
2026-04-09 00:19:38.457244 | orchestrator | ]
2026-04-09 00:19:38.457255 | orchestrator | }
2026-04-09 00:19:38.457267 | orchestrator |
2026-04-09 00:19:38.457278 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-04-09 00:19:38.510867 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:19:38.510952 | orchestrator |
2026-04-09 00:19:38.510966 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:19:38.510979 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-04-09 00:19:38.510991 | orchestrator |
2026-04-09 00:19:38.581803 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-09 00:19:38.581893 | orchestrator | + deactivate
2026-04-09 00:19:38.581908 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-09 00:19:38.581922 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-09 00:19:38.581932 | orchestrator | + export PATH
2026-04-09 00:19:38.581944 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-09 00:19:38.581956 | orchestrator | + '[' -n '' ']'
2026-04-09 00:19:38.581967 | orchestrator | + hash -r
2026-04-09 00:19:38.581978 | orchestrator | + '[' -n '' ']'
2026-04-09 00:19:38.581988 | orchestrator | + unset VIRTUAL_ENV
2026-04-09 00:19:38.581999 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-09 00:19:38.582010 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-09 00:19:38.582121 | orchestrator | + unset -f deactivate
2026-04-09 00:19:38.582136 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-04-09 00:19:38.589621 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-09 00:19:38.589657 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-04-09 00:19:38.589669 | orchestrator | + local max_attempts=60
2026-04-09 00:19:38.589681 | orchestrator | + local name=ceph-ansible
2026-04-09 00:19:38.589692 | orchestrator | + local attempt_num=1
2026-04-09 00:19:38.590572 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-09 00:19:38.631661 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-09 00:19:38.631744 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-09 00:19:38.631759 | orchestrator | + local max_attempts=60
2026-04-09 00:19:38.631771 | orchestrator | + local name=kolla-ansible
2026-04-09 00:19:38.631782 | orchestrator | + local attempt_num=1
2026-04-09 00:19:38.631794 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-09 00:19:38.666140 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-09 00:19:38.666222 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-09 00:19:38.666236 | orchestrator | + local max_attempts=60
2026-04-09 00:19:38.666248 | orchestrator | + local name=osism-ansible
2026-04-09 00:19:38.666260 | orchestrator | + local attempt_num=1
2026-04-09 00:19:38.666406 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-09 00:19:38.696787 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-09 00:19:38.696881 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-09 00:19:38.696897 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-04-09 00:19:39.330148 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-04-09 00:19:39.505844 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-04-09 00:19:39.505938 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2026-04-09 00:19:39.505954 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2026-04-09 00:19:39.505966 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-04-09 00:19:39.505992 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2026-04-09 00:19:39.506003 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2026-04-09 00:19:39.506013 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2026-04-09 00:19:39.506144 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2026-04-09 00:19:39.506157 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2026-04-09 00:19:39.506168 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2026-04-09 00:19:39.506178 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2026-04-09 00:19:39.506189 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2026-04-09 00:19:39.506219 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2026-04-09 00:19:39.506231 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-04-09 00:19:39.506242 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2026-04-09 00:19:39.506254 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2026-04-09 00:19:39.510838 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-09 00:19:39.550817 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-09 00:19:39.550884 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-04-09 00:19:39.554642 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-04-09 00:19:51.768717 | orchestrator | 2026-04-09 00:19:51 | INFO  | Prepare task for execution of resolvconf.
2026-04-09 00:19:51.990752 | orchestrator | 2026-04-09 00:19:51 | INFO  | Task 802970ae-0488-461f-ba6b-6082298cdf9f (resolvconf) was prepared for execution.
2026-04-09 00:19:51.990868 | orchestrator | 2026-04-09 00:19:51 | INFO  | It takes a moment until task 802970ae-0488-461f-ba6b-6082298cdf9f (resolvconf) has been started and output is visible here.
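The `wait_for_container_healthy` calls traced earlier in this log poll `docker inspect` until a container reports `healthy`. A minimal sketch of such a helper follows; the real script under /opt/configuration is not shown in this log, so the polling interval, the overridable `DOCKER` variable, and the error message are assumptions:

```shell
# Hypothetical reconstruction of the wait_for_container_healthy helper traced
# above. DOCKER is made overridable here only to keep the sketch self-contained;
# the traced script calls /usr/bin/docker directly.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    max_attempts=$1
    name=$2
    attempt_num=1
    # Poll the container's health status until it reports "healthy".
    while [ "$($DOCKER inspect -f '{{.State.Health.Status}}' "$name")" != healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1   # polling interval is an assumption
    done
    return 0
}
```

In the trace, each container was already healthy on the first `docker inspect`, so the loop body never ran.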
2026-04-09 00:20:04.694329 | orchestrator |
2026-04-09 00:20:04.694506 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-04-09 00:20:04.694534 | orchestrator |
2026-04-09 00:20:04.694563 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-09 00:20:04.694582 | orchestrator | Thursday 09 April 2026 00:19:55 +0000 (0:00:00.173) 0:00:00.173 ********
2026-04-09 00:20:04.694599 | orchestrator | ok: [testbed-manager]
2026-04-09 00:20:04.694616 | orchestrator |
2026-04-09 00:20:04.694634 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-09 00:20:04.694652 | orchestrator | Thursday 09 April 2026 00:19:58 +0000 (0:00:03.639) 0:00:03.812 ********
2026-04-09 00:20:04.694670 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:20:04.694693 | orchestrator |
2026-04-09 00:20:04.694713 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-09 00:20:04.694733 | orchestrator | Thursday 09 April 2026 00:19:58 +0000 (0:00:00.054) 0:00:03.866 ********
2026-04-09 00:20:04.694752 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-04-09 00:20:04.694773 | orchestrator |
2026-04-09 00:20:04.694791 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-09 00:20:04.694821 | orchestrator | Thursday 09 April 2026 00:19:58 +0000 (0:00:00.057) 0:00:03.945 ********
2026-04-09 00:20:04.694845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 00:20:04.694877 | orchestrator |
2026-04-09 00:20:04.694894 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-09 00:20:04.694919 | orchestrator | Thursday 09 April 2026 00:19:58 +0000 (0:00:00.057) 0:00:04.003 ********
2026-04-09 00:20:04.694948 | orchestrator | ok: [testbed-manager]
2026-04-09 00:20:04.694965 | orchestrator |
2026-04-09 00:20:04.694988 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-09 00:20:04.695023 | orchestrator | Thursday 09 April 2026 00:19:59 +0000 (0:00:01.126) 0:00:05.129 ********
2026-04-09 00:20:04.695125 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:20:04.695208 | orchestrator |
2026-04-09 00:20:04.695238 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-09 00:20:04.695256 | orchestrator | Thursday 09 April 2026 00:20:00 +0000 (0:00:00.066) 0:00:05.196 ********
2026-04-09 00:20:04.695273 | orchestrator | ok: [testbed-manager]
2026-04-09 00:20:04.695290 | orchestrator |
2026-04-09 00:20:04.695308 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-09 00:20:04.695327 | orchestrator | Thursday 09 April 2026 00:20:00 +0000 (0:00:00.563) 0:00:05.760 ********
2026-04-09 00:20:04.695345 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:20:04.695363 | orchestrator |
2026-04-09 00:20:04.695380 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-09 00:20:04.695392 | orchestrator | Thursday 09 April 2026 00:20:00 +0000 (0:00:00.079) 0:00:05.840 ********
2026-04-09 00:20:04.695403 | orchestrator | changed: [testbed-manager]
2026-04-09 00:20:04.695414 | orchestrator |
2026-04-09 00:20:04.695425 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-09 00:20:04.695436 | orchestrator | Thursday 09 April 2026 00:20:01 +0000 (0:00:00.601) 0:00:06.441 ********
2026-04-09 00:20:04.695447 | orchestrator | changed: [testbed-manager]
2026-04-09 00:20:04.695457 | orchestrator |
2026-04-09 00:20:04.695468 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-09 00:20:04.695480 | orchestrator | Thursday 09 April 2026 00:20:02 +0000 (0:00:01.124) 0:00:07.565 ********
2026-04-09 00:20:04.695491 | orchestrator | ok: [testbed-manager]
2026-04-09 00:20:04.695502 | orchestrator |
2026-04-09 00:20:04.695512 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-09 00:20:04.695524 | orchestrator | Thursday 09 April 2026 00:20:03 +0000 (0:00:00.985) 0:00:08.551 ********
2026-04-09 00:20:04.695535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-04-09 00:20:04.695546 | orchestrator |
2026-04-09 00:20:04.695557 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-09 00:20:04.695568 | orchestrator | Thursday 09 April 2026 00:20:03 +0000 (0:00:00.084) 0:00:08.636 ********
2026-04-09 00:20:04.695579 | orchestrator | changed: [testbed-manager]
2026-04-09 00:20:04.695590 | orchestrator |
2026-04-09 00:20:04.695600 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:20:04.695612 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-09 00:20:04.695623 | orchestrator |
2026-04-09 00:20:04.695634 | orchestrator |
2026-04-09 00:20:04.695645 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:20:04.695656 | orchestrator | Thursday 09 April 2026 00:20:04 +0000 (0:00:01.062) 0:00:09.698 ********
2026-04-09 00:20:04.695667 | orchestrator | ===============================================================================
2026-04-09 00:20:04.695678 | orchestrator | Gathering Facts --------------------------------------------------------- 3.64s
2026-04-09 00:20:04.695688 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s
2026-04-09 00:20:04.695699 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.12s
2026-04-09 00:20:04.695710 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.06s
2026-04-09 00:20:04.695738 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s
2026-04-09 00:20:04.695750 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.60s
2026-04-09 00:20:04.695782 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.56s
2026-04-09 00:20:04.695794 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-04-09 00:20:04.695805 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-04-09 00:20:04.695824 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2026-04-09 00:20:04.695835 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2026-04-09 00:20:04.695846 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s
2026-04-09 00:20:04.695857 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s
2026-04-09 00:20:04.827971 | orchestrator | + osism apply sshconfig
2026-04-09 00:20:16.051245 | orchestrator | 2026-04-09 00:20:16 | INFO  | Prepare task for execution of sshconfig.
2026-04-09 00:20:16.127666 | orchestrator | 2026-04-09 00:20:16 | INFO  | Task 54a945d9-7460-43aa-b73f-031878e3b51d (sshconfig) was prepared for execution.
2026-04-09 00:20:16.127752 | orchestrator | 2026-04-09 00:20:16 | INFO  | It takes a moment until task 54a945d9-7460-43aa-b73f-031878e3b51d (sshconfig) has been started and output is visible here.
2026-04-09 00:20:27.128662 | orchestrator |
2026-04-09 00:20:27.128765 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-04-09 00:20:27.128781 | orchestrator |
2026-04-09 00:20:27.128792 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-04-09 00:20:27.128803 | orchestrator | Thursday 09 April 2026 00:20:19 +0000 (0:00:00.200) 0:00:00.200 ********
2026-04-09 00:20:27.128813 | orchestrator | ok: [testbed-manager]
2026-04-09 00:20:27.128824 | orchestrator |
2026-04-09 00:20:27.128834 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-04-09 00:20:27.128844 | orchestrator | Thursday 09 April 2026 00:20:20 +0000 (0:00:00.844) 0:00:01.044 ********
2026-04-09 00:20:27.128854 | orchestrator | changed: [testbed-manager]
2026-04-09 00:20:27.128864 | orchestrator |
2026-04-09 00:20:27.128874 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-04-09 00:20:27.128884 | orchestrator | Thursday 09 April 2026 00:20:20 +0000 (0:00:00.542) 0:00:01.587 ********
2026-04-09 00:20:27.128894 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-04-09 00:20:27.128904 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-04-09 00:20:27.128914 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-04-09 00:20:27.128923 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-04-09 00:20:27.128933 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-04-09 00:20:27.128943 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-04-09 00:20:27.128953 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-04-09 00:20:27.128962 | orchestrator |
2026-04-09 00:20:27.128972 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-04-09 00:20:27.128982 | orchestrator | Thursday 09 April 2026 00:20:26 +0000 (0:00:05.671) 0:00:07.258 ********
2026-04-09 00:20:27.128991 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:20:27.129001 | orchestrator |
2026-04-09 00:20:27.129011 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-04-09 00:20:27.129020 | orchestrator | Thursday 09 April 2026 00:20:26 +0000 (0:00:00.102) 0:00:07.361 ********
2026-04-09 00:20:27.129030 | orchestrator | changed: [testbed-manager]
2026-04-09 00:20:27.129039 | orchestrator |
2026-04-09 00:20:27.129049 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:20:27.129060 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-09 00:20:27.129174 | orchestrator |
2026-04-09 00:20:27.129188 | orchestrator |
2026-04-09 00:20:27.129200 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:20:27.129212 | orchestrator | Thursday 09 April 2026 00:20:26 +0000 (0:00:00.547) 0:00:07.909 ********
2026-04-09 00:20:27.129224 | orchestrator | ===============================================================================
2026-04-09 00:20:27.129236 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.67s
2026-04-09 00:20:27.129271 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.84s
2026-04-09 00:20:27.129284 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s
2026-04-09 00:20:27.129295 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s
2026-04-09 00:20:27.129307 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s
2026-04-09 00:20:27.288697 | orchestrator | + osism apply known-hosts
2026-04-09 00:20:38.509035 | orchestrator | 2026-04-09 00:20:38 | INFO  | Prepare task for execution of known-hosts.
2026-04-09 00:20:38.574314 | orchestrator | 2026-04-09 00:20:38 | INFO  | Task ea43b5ae-cecc-4d29-b1d2-3e100d4075e5 (known-hosts) was prepared for execution.
2026-04-09 00:20:38.574405 | orchestrator | 2026-04-09 00:20:38 | INFO  | It takes a moment until task ea43b5ae-cecc-4d29-b1d2-3e100d4075e5 (known-hosts) has been started and output is visible here.
2026-04-09 00:20:52.854601 | orchestrator |
2026-04-09 00:20:52.854718 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-04-09 00:20:52.854737 | orchestrator |
2026-04-09 00:20:52.854749 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-04-09 00:20:52.854762 | orchestrator | Thursday 09 April 2026 00:20:41 +0000 (0:00:00.171) 0:00:00.171 ********
2026-04-09 00:20:52.854775 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-04-09 00:20:52.854786 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-09 00:20:52.854797 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-09 00:20:52.854818 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-09 00:20:52.854830 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-04-09 00:20:52.854840 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-04-09 00:20:52.854851 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-04-09 00:20:52.854862 | orchestrator |
2026-04-09 00:20:52.854873 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-04-09 00:20:52.854885 | orchestrator | Thursday 09 April 2026 00:20:47 +0000 (0:00:06.171) 0:00:06.343 ********
2026-04-09 00:20:52.854897 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-04-09 00:20:52.854910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-04-09 00:20:52.854921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-04-09 00:20:52.854932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-04-09 00:20:52.854943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-04-09 00:20:52.854953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-04-09 00:20:52.854964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-04-09 00:20:52.854974 | orchestrator |
2026-04-09 00:20:52.854985 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-09 00:20:52.854996 | orchestrator | Thursday 09 April 2026 00:20:47 +0000 (0:00:00.152) 0:00:06.495 ********
2026-04-09 00:20:52.855027 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJdp1F1vzIzfoF2UbK71FZFCf184zInO5oINhY7xZTQE)
2026-04-09 00:20:52.855043 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLUIh1Y6hM8QQDqGPGW5fCkcBRqC9/XnxyVm6LHGhxy5YcgnS0g9oCf+HlJAQ12C9hDTA5MbR4Jw6H+EexhZTQLztwIZ8W+KTSXeIS6BqZ87xHq2fhosn8k27de2hNG7jpIiRY/AhzLrQlSJhoCQbElvU7Ygk2z+S+QqJb8wUdQ0R1ncTFC3jUus188P3IZohQQ1vrVG/8DZQUIacyGApXrEknrh3BOsBP3F7KBRXAb2Kzyzb7g69bn+J/EuBqKgY5wuPaw0cxsWfrOOUNMTBSfT/3SvYsJjqMSBnOJCPRkIFeDV+jNKfoJVdP0daZHfyjaLai90sISAXizM7NhxQpN5mBt0ePlKaW3kJDU/UVlex7otEUjHNF5tRbqgGK/2sYxiN58KZIIzwNc67W8wqvGkJKXXNBbGW3AyV59g3rXFYY1nQsHymCRHa8X2To/4RnbiMADA3oBRMGXHC5/qkARPjDlb3nfa4GtkvdoneMhPvFF+LKPAsnr+BQrwhzteU=)
2026-04-09 00:20:52.855131 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI49iBECNSLlvaKbYiHMr9AbQtj0atC/wRmWev+F+dOjYzTLxp6rUjmR0hIGRPuBwVoJVQh1mLPxifO1dTKDPjY=)
2026-04-09 00:20:52.855152 | orchestrator |
2026-04-09 00:20:52.855166 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-09 00:20:52.855179 | orchestrator | Thursday 09 April 2026 00:20:48 +0000 (0:00:01.121) 0:00:07.617 ********
2026-04-09 00:20:52.855192 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMsV2ew6r0ZzAzyWa32X7J+WzafZFlf0htaBwVUBiNATSxD/aYf/PVMFqb3kjvutRxM3VnrBfsbKRPdiSQH51zM=)
2026-04-09 00:20:52.855234 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0zUDK+02bSXcltpuPKVdfkEGoarITlNH1Qjoc8tIAk1W1mlQzd4xSi/OUxQHinr7N6TbQ3+rPst210EIdh+S6+IXcuy1NTowjwQPfe9Vm1OX+lh9yVfdMZf0CVe4rEkHlMg2+bmqTvRK3ihl/kU/X0MwJ+gJlKY3z6A84iv4yG62RBHh2qGiHAyU1ZWUdlCoHYeTZ/xau3RUWQTPUETcDBurBwnQvKREqmpFYI4bDPnLh2XtsjN+1Hg9208R/G1N9eADh1p8+F+IE+8a2POSDW7doiZK7LlDKQXkoqAqB7TJ5EmbVioeetqWvMgMObFpe8O3ydWvFIeLD7zUz4te5aqwuCh6BOtshZwYIBMv3wSOTji0nA04uxEyRQ7TQGAaWCTQcxCvL8xvcREsI88+C8Tkm7NC8Njv/81cxUgZw1IcfYAeRRVRi+Ks+Y6a4z3aBqQSCVYqLEmEC4gsIG5pMzpRx8bhYtCYBIxoqrOjAtSnsul2fAafRfXtSOIs5yZk=)
2026-04-09 00:20:52.855249 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHDz22sNShhTx992U2Sj1tKmHWmXOGLaMFCp1F0sI5z2)
2026-04-09 00:20:52.855262 | orchestrator |
2026-04-09 00:20:52.855275 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-09 00:20:52.855287 | orchestrator | Thursday 09 April 2026 00:20:49 +0000 (0:00:00.910) 0:00:08.528 ********
2026-04-09 00:20:52.855300 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDkpyyoEk0jauNnu/N58I1WK8InbMmdeRyJM2WGigluf1grc2/3nYj9o+4nkGtuwunrUVanGeT0tU9LzLSiZGV8w0IGHO3wW1Tfx1dzQMGb2jgTNfY3uYLhUd8F3YMoJmUUydhf362OrZnvuscqgK5ekQFhEg6Wme+jTnfGsuMKxGmX4qOWftOQf0JdhKXBL50xsAbEbsJUap6vTb5tiToWEZeeMfWHBFW9y4KJUHsllFUIrGPCRBmO23IRxlzm9Ffs3hg3u/UWMqaFvJIYiXt0rtz3D5h/No3CXufWI0fM5kelMIaPDp3hTi973xMqtQbTuGhJe+7SoQq21nl/2r8WXKVIcjXakrJid1XMv1clCulSAIeIhC9F0tv+AwfZishN1QYzWn4F0NMq842Io9lLGVgznsUCt/CAlkxNlo/kXWK7VaiYFTSJDP3IOuQw+vTAAWFFRSagjEHln+olPeXjyXtY8qjv6cMcwF2dFENs6MW5LcG98hCCDkLITo3Vh0=)
2026-04-09 00:20:52.855396 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPGhbSAWUkOTMLOoJIlFpg3XcwGjJluys4jf42UK4MAuCCyuDkvp2LMu1bW0qyulGO9M/utGTTji+RemuSbbi7w=)
2026-04-09 00:20:52.855410 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINoQ+QOp7IrsBgM43qwXS1yLI2gOd8llHjE+aBWWqoCT)
2026-04-09 00:20:52.855423 | orchestrator |
2026-04-09 00:20:52.855436 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-09 00:20:52.855448 | orchestrator | Thursday 09 April 2026 00:20:50 +0000 (0:00:00.912) 0:00:09.440 ********
2026-04-09 00:20:52.855461 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDy9OwnZlvOg/BKQQ14G8WYK9KtTlBXTOEuvjbzb7NbMuXh66g+EEHzofdqCLlNnRw3h2ECcvrFniRQ6WSImNFg=)
2026-04-09 00:20:52.855488 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtySujex6fUU9kG7uFJlMNLLOZUnltrl4BDvkfOjtUJKS1hv8m6TUH3SZtbVaYsCVMDTADu0Q8OJ2FDkoqpWnQc4uUt2FdhvMAI86UUi3OcJJIAwwR8u/rpcx2cRHnMIqvg5WjQUV4f9/sq8U0vcKPBTVbkJ9jJ/XbyRp7y9aNT2cuofk7Q8dnhrwq67EoYmQoZdM2487Ltcyhr/W/SyGNiTnjEpSpWhsQa6TA2HbAU2MXEMxyxVmQ8a4dCtoSz8x9uNyfUKfqxvScMg+dfhQnmUWMbGYnztKBpnFXCR6aduxDJczvWOu98QI2X8qUXQEPxBIfl+GnPs4SBHw4ZvR/3FBWScs01clS/Jz1uVnbzYDkyOtEGH7eaACTNABTuEVWB2r4W+FeVNJABtCcIfLK0nAbP6iS63eigtcYqv/wlOQGAEqLPZcTDBf2nN3/MAPloV0so/CPHjT9lLl0hqLe1powcRYOvd7wl3mYhw7TjAV3PUkl29ZJf56Bj9KCJy0=)
2026-04-09 00:20:52.855500 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPEK1vy67zfxndalEsQTUD0XRSO28BB6B86DeNpd6RHb)
2026-04-09 00:20:52.855511 | orchestrator |
2026-04-09 00:20:52.855522 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-09 00:20:52.855533 | orchestrator | Thursday 09 April 2026 00:20:51 +0000 (0:00:00.949) 0:00:10.389 ********
2026-04-09 00:20:52.855544 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQCimTgWjblS2BSqdQA8WqAw+eTnlgj4vi0mnZRKtW4bhWItFGI4f+v1UOaHVSA7qB5dUq3UZxIQce7KwsUVK9CgOHyzm44lGJkbyvF3aCpdgZoeKfpQa7645RJH1pTI3xlJlPmuZLCGQXdc47akTLJNdKO7ZLwKWJv3d11pCHBI2eor/ZepLnwRRYwXW/W2yM2cQVi/J0EuXC5oX/PIu9SU262XQK1fR+sHvuxr+9nFpn2C/JX8NIs99kcE2ieAZ9Log9rpb4GCSGlH0WDMSoxSm4OSFK4IfU/0kAjXt0E8P5hYVgwq7VuDfNPQtWNjxcb8fXRWEoT8QFIypswHIZlPPgnjXDLpM8St27wcg4lDqYdohRXfpJTJQKfYv4+37aMKR7WxswbE/l4VBi46VeV9Rt6RxoR/QBvlHpxvhWQAmJG7WafwaU/VwHCU405rK8efKDRguZrzMaw1+SpONE3OliV9YloZuGwketMvbqxeJHzG7kd8H4oAJqlBTSEuQDk=) 2026-04-09 00:20:52.855555 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID5PlSqt1oiIhJXPZGmF8RhhLXDb+XGzsVXG5COnLpEj) 2026-04-09 00:20:52.855566 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFzdpD6IqcMHQd3jYoJSZvrvJMqOyg5wkEdbSz0PfudQukcbd84jX884kjErryXeBt1Fbv7T/R8/TK7ORqiJVq8=) 2026-04-09 00:20:52.855577 | orchestrator | 2026-04-09 00:20:52.855588 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:20:52.855599 | orchestrator | Thursday 09 April 2026 00:20:52 +0000 (0:00:00.945) 0:00:11.335 ******** 2026-04-09 00:20:52.855616 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOZoTbnQTmDsM/tIvOyOk02ZYQGJJ5mEIPI6evvSoxLR) 2026-04-09 00:21:03.039378 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCgHU1J+POkwPhCtvQmPCRj5yf7/u4aa74M0RbU6GvZf8uVVEIfgpZSvt+gQg2itHWHKaEt9IM05dhsa6BOcAMEdyFjChFVbtieyBziXFKx8QNsSC0zju4foZpQMWizMBMz/xn1cSsnsLXKgqLu+rWafZnMf4lrOTrHm6nsBhsSKw3ceUGj6d1lsLG2rAXTDRwZicBtPlvBj7TUFOF5i11N13eoGJy6vmoEcgwWBEVxz6+ft7+wwG+Jsas0eq8F3PsirjAoyt86x8db9/M9KQotqhV3Tpigv7cY12X8ZQ+EvneqayYHM+yQQEgk9bIRVzFgVBL6bYcoqNSPBv2+cUFkgZ3cChzznxesa5PofIfGe3/jW5monxgH+ArdX4NP4jC1IAyMIpoS34d/DYhGV5kgArFm0ZoH+faQEVgP6H7YAgSwLbPMInnRwCnBR+Rh7GQFVHE7nQAZLvOknL8zgptHnZdSIYCNSKZktjHkXMxkjr/3gzjwoI9wwLAAMcRZOqs=) 2026-04-09 00:21:03.039538 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI22NQbAzZh4aUx2NeRg97ClMHLNPQf1Ue7DZFMsiXjuD1938T60JeDsgzB+r+S1I9J5GRz7qtGndI3dMDOFdIs=) 2026-04-09 00:21:03.039559 | orchestrator | 2026-04-09 00:21:03.039574 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:21:03.039636 | orchestrator | Thursday 09 April 2026 00:20:53 +0000 (0:00:00.917) 0:00:12.252 ******** 2026-04-09 00:21:03.039651 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIwquoSpMWicjh4ro5blIpXvtIt5dPXP2fxF5ro13bWLC9/Cz2/3cZCju4YkvY6jv/pA83VNFeGslTAyA0HpJVE=) 2026-04-09 00:21:03.039690 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG/RQUYhXwWhlYHLgLTnk21vh5I0KpqT6m+Am3JA1z4V) 2026-04-09 00:21:03.039704 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC5njkHcifG1UVIWbzbeYw6hpAGwPvYg1am9ylJhwPKS1EYRbUDh5imeLIerZyMycmDjSyO2GYTV2wLNkrFoe/oXQ2721qUNmE4/OT/el/GTGKv9aZs+lppAe01BsdG3BR5R32kOyjDP+IdLzW/CYdAwbuKyEM3nEtYJkApFtdFdWWfp6xI5d4yg+1feeeROv4RjWoyyRmxrl7x/UwY+M7F4Ec+bzp4AHA6sFvE/zxFaUvkoeC7j1Q9UnOAMY7eJKojFtvyPXLXwx3q3GuCrBvnTRCN+E+DI+eJ+5aTte8HomLjBwTi8QNNJRqH7ywBrdK1ygrab7KwVk0aHcXpRJHDfQ3zWxY/+GMUj7g6AgRkmQtPE4uXGqFz3NJaCE3HFwUV6CtDb+80f7LcyPYxN1OxR2Yr/SVx9yvcjIu+7GgJb2UE6bQU9tSIVEI5/zvo90w2xKy7FdAQ77ZJNLMOkLwAl0IZO46qkkymZYXJfI6+Sj8xW6xT4QA5uR9MxTh2dW0=) 2026-04-09 00:21:03.039716 | orchestrator | 2026-04-09 00:21:03.039727 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-09 00:21:03.039739 | orchestrator | Thursday 09 April 2026 00:20:54 +0000 (0:00:00.925) 0:00:13.178 ******** 2026-04-09 00:21:03.039750 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-09 00:21:03.039762 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-09 00:21:03.039773 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-09 00:21:03.039783 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-09 00:21:03.039794 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-09 00:21:03.039804 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-09 00:21:03.039815 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-09 00:21:03.039825 | orchestrator | 2026-04-09 00:21:03.039836 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-09 00:21:03.039849 | orchestrator | Thursday 09 April 2026 00:20:59 +0000 (0:00:05.034) 0:00:18.212 ******** 2026-04-09 00:21:03.039878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-manager) 2026-04-09 00:21:03.039894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-09 00:21:03.039908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-09 00:21:03.039922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-09 00:21:03.039935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-09 00:21:03.039948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-09 00:21:03.039961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-09 00:21:03.039975 | orchestrator | 2026-04-09 00:21:03.040006 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:21:03.040020 | orchestrator | Thursday 09 April 2026 00:20:59 +0000 (0:00:00.177) 0:00:18.389 ******** 2026-04-09 00:21:03.040034 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJdp1F1vzIzfoF2UbK71FZFCf184zInO5oINhY7xZTQE) 2026-04-09 00:21:03.040051 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDLUIh1Y6hM8QQDqGPGW5fCkcBRqC9/XnxyVm6LHGhxy5YcgnS0g9oCf+HlJAQ12C9hDTA5MbR4Jw6H+EexhZTQLztwIZ8W+KTSXeIS6BqZ87xHq2fhosn8k27de2hNG7jpIiRY/AhzLrQlSJhoCQbElvU7Ygk2z+S+QqJb8wUdQ0R1ncTFC3jUus188P3IZohQQ1vrVG/8DZQUIacyGApXrEknrh3BOsBP3F7KBRXAb2Kzyzb7g69bn+J/EuBqKgY5wuPaw0cxsWfrOOUNMTBSfT/3SvYsJjqMSBnOJCPRkIFeDV+jNKfoJVdP0daZHfyjaLai90sISAXizM7NhxQpN5mBt0ePlKaW3kJDU/UVlex7otEUjHNF5tRbqgGK/2sYxiN58KZIIzwNc67W8wqvGkJKXXNBbGW3AyV59g3rXFYY1nQsHymCRHa8X2To/4RnbiMADA3oBRMGXHC5/qkARPjDlb3nfa4GtkvdoneMhPvFF+LKPAsnr+BQrwhzteU=) 2026-04-09 00:21:03.040123 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI49iBECNSLlvaKbYiHMr9AbQtj0atC/wRmWev+F+dOjYzTLxp6rUjmR0hIGRPuBwVoJVQh1mLPxifO1dTKDPjY=) 2026-04-09 00:21:03.040145 | orchestrator | 2026-04-09 00:21:03.040164 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:21:03.040179 | orchestrator | Thursday 09 April 2026 00:21:00 +0000 (0:00:00.974) 0:00:19.363 ******** 2026-04-09 00:21:03.040194 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0zUDK+02bSXcltpuPKVdfkEGoarITlNH1Qjoc8tIAk1W1mlQzd4xSi/OUxQHinr7N6TbQ3+rPst210EIdh+S6+IXcuy1NTowjwQPfe9Vm1OX+lh9yVfdMZf0CVe4rEkHlMg2+bmqTvRK3ihl/kU/X0MwJ+gJlKY3z6A84iv4yG62RBHh2qGiHAyU1ZWUdlCoHYeTZ/xau3RUWQTPUETcDBurBwnQvKREqmpFYI4bDPnLh2XtsjN+1Hg9208R/G1N9eADh1p8+F+IE+8a2POSDW7doiZK7LlDKQXkoqAqB7TJ5EmbVioeetqWvMgMObFpe8O3ydWvFIeLD7zUz4te5aqwuCh6BOtshZwYIBMv3wSOTji0nA04uxEyRQ7TQGAaWCTQcxCvL8xvcREsI88+C8Tkm7NC8Njv/81cxUgZw1IcfYAeRRVRi+Ks+Y6a4z3aBqQSCVYqLEmEC4gsIG5pMzpRx8bhYtCYBIxoqrOjAtSnsul2fAafRfXtSOIs5yZk=) 2026-04-09 00:21:03.040208 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMsV2ew6r0ZzAzyWa32X7J+WzafZFlf0htaBwVUBiNATSxD/aYf/PVMFqb3kjvutRxM3VnrBfsbKRPdiSQH51zM=) 
2026-04-09 00:21:03.040222 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHDz22sNShhTx992U2Sj1tKmHWmXOGLaMFCp1F0sI5z2) 2026-04-09 00:21:03.040235 | orchestrator | 2026-04-09 00:21:03.040246 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:21:03.040257 | orchestrator | Thursday 09 April 2026 00:21:01 +0000 (0:00:00.934) 0:00:20.297 ******** 2026-04-09 00:21:03.040268 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDkpyyoEk0jauNnu/N58I1WK8InbMmdeRyJM2WGigluf1grc2/3nYj9o+4nkGtuwunrUVanGeT0tU9LzLSiZGV8w0IGHO3wW1Tfx1dzQMGb2jgTNfY3uYLhUd8F3YMoJmUUydhf362OrZnvuscqgK5ekQFhEg6Wme+jTnfGsuMKxGmX4qOWftOQf0JdhKXBL50xsAbEbsJUap6vTb5tiToWEZeeMfWHBFW9y4KJUHsllFUIrGPCRBmO23IRxlzm9Ffs3hg3u/UWMqaFvJIYiXt0rtz3D5h/No3CXufWI0fM5kelMIaPDp3hTi973xMqtQbTuGhJe+7SoQq21nl/2r8WXKVIcjXakrJid1XMv1clCulSAIeIhC9F0tv+AwfZishN1QYzWn4F0NMq842Io9lLGVgznsUCt/CAlkxNlo/kXWK7VaiYFTSJDP3IOuQw+vTAAWFFRSagjEHln+olPeXjyXtY8qjv6cMcwF2dFENs6MW5LcG98hCCDkLITo3Vh0=) 2026-04-09 00:21:03.040280 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPGhbSAWUkOTMLOoJIlFpg3XcwGjJluys4jf42UK4MAuCCyuDkvp2LMu1bW0qyulGO9M/utGTTji+RemuSbbi7w=) 2026-04-09 00:21:03.040291 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINoQ+QOp7IrsBgM43qwXS1yLI2gOd8llHjE+aBWWqoCT) 2026-04-09 00:21:03.040302 | orchestrator | 2026-04-09 00:21:03.040313 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:21:03.040323 | orchestrator | Thursday 09 April 2026 00:21:02 +0000 (0:00:00.928) 0:00:21.226 ******** 2026-04-09 00:21:03.040334 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIPEK1vy67zfxndalEsQTUD0XRSO28BB6B86DeNpd6RHb) 2026-04-09 00:21:03.040356 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtySujex6fUU9kG7uFJlMNLLOZUnltrl4BDvkfOjtUJKS1hv8m6TUH3SZtbVaYsCVMDTADu0Q8OJ2FDkoqpWnQc4uUt2FdhvMAI86UUi3OcJJIAwwR8u/rpcx2cRHnMIqvg5WjQUV4f9/sq8U0vcKPBTVbkJ9jJ/XbyRp7y9aNT2cuofk7Q8dnhrwq67EoYmQoZdM2487Ltcyhr/W/SyGNiTnjEpSpWhsQa6TA2HbAU2MXEMxyxVmQ8a4dCtoSz8x9uNyfUKfqxvScMg+dfhQnmUWMbGYnztKBpnFXCR6aduxDJczvWOu98QI2X8qUXQEPxBIfl+GnPs4SBHw4ZvR/3FBWScs01clS/Jz1uVnbzYDkyOtEGH7eaACTNABTuEVWB2r4W+FeVNJABtCcIfLK0nAbP6iS63eigtcYqv/wlOQGAEqLPZcTDBf2nN3/MAPloV0so/CPHjT9lLl0hqLe1powcRYOvd7wl3mYhw7TjAV3PUkl29ZJf56Bj9KCJy0=) 2026-04-09 00:21:07.016369 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDy9OwnZlvOg/BKQQ14G8WYK9KtTlBXTOEuvjbzb7NbMuXh66g+EEHzofdqCLlNnRw3h2ECcvrFniRQ6WSImNFg=) 2026-04-09 00:21:07.016480 | orchestrator | 2026-04-09 00:21:07.016498 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:21:07.016512 | orchestrator | Thursday 09 April 2026 00:21:03 +0000 (0:00:00.926) 0:00:22.152 ******** 2026-04-09 00:21:07.016524 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFzdpD6IqcMHQd3jYoJSZvrvJMqOyg5wkEdbSz0PfudQukcbd84jX884kjErryXeBt1Fbv7T/R8/TK7ORqiJVq8=) 2026-04-09 00:21:07.016536 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID5PlSqt1oiIhJXPZGmF8RhhLXDb+XGzsVXG5COnLpEj) 2026-04-09 00:21:07.016569 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCimTgWjblS2BSqdQA8WqAw+eTnlgj4vi0mnZRKtW4bhWItFGI4f+v1UOaHVSA7qB5dUq3UZxIQce7KwsUVK9CgOHyzm44lGJkbyvF3aCpdgZoeKfpQa7645RJH1pTI3xlJlPmuZLCGQXdc47akTLJNdKO7ZLwKWJv3d11pCHBI2eor/ZepLnwRRYwXW/W2yM2cQVi/J0EuXC5oX/PIu9SU262XQK1fR+sHvuxr+9nFpn2C/JX8NIs99kcE2ieAZ9Log9rpb4GCSGlH0WDMSoxSm4OSFK4IfU/0kAjXt0E8P5hYVgwq7VuDfNPQtWNjxcb8fXRWEoT8QFIypswHIZlPPgnjXDLpM8St27wcg4lDqYdohRXfpJTJQKfYv4+37aMKR7WxswbE/l4VBi46VeV9Rt6RxoR/QBvlHpxvhWQAmJG7WafwaU/VwHCU405rK8efKDRguZrzMaw1+SpONE3OliV9YloZuGwketMvbqxeJHzG7kd8H4oAJqlBTSEuQDk=) 2026-04-09 00:21:07.016584 | orchestrator | 2026-04-09 00:21:07.016596 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:21:07.016607 | orchestrator | Thursday 09 April 2026 00:21:04 +0000 (0:00:00.988) 0:00:23.141 ******** 2026-04-09 00:21:07.016617 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI22NQbAzZh4aUx2NeRg97ClMHLNPQf1Ue7DZFMsiXjuD1938T60JeDsgzB+r+S1I9J5GRz7qtGndI3dMDOFdIs=) 2026-04-09 00:21:07.016634 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgHU1J+POkwPhCtvQmPCRj5yf7/u4aa74M0RbU6GvZf8uVVEIfgpZSvt+gQg2itHWHKaEt9IM05dhsa6BOcAMEdyFjChFVbtieyBziXFKx8QNsSC0zju4foZpQMWizMBMz/xn1cSsnsLXKgqLu+rWafZnMf4lrOTrHm6nsBhsSKw3ceUGj6d1lsLG2rAXTDRwZicBtPlvBj7TUFOF5i11N13eoGJy6vmoEcgwWBEVxz6+ft7+wwG+Jsas0eq8F3PsirjAoyt86x8db9/M9KQotqhV3Tpigv7cY12X8ZQ+EvneqayYHM+yQQEgk9bIRVzFgVBL6bYcoqNSPBv2+cUFkgZ3cChzznxesa5PofIfGe3/jW5monxgH+ArdX4NP4jC1IAyMIpoS34d/DYhGV5kgArFm0ZoH+faQEVgP6H7YAgSwLbPMInnRwCnBR+Rh7GQFVHE7nQAZLvOknL8zgptHnZdSIYCNSKZktjHkXMxkjr/3gzjwoI9wwLAAMcRZOqs=) 2026-04-09 00:21:07.016647 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOZoTbnQTmDsM/tIvOyOk02ZYQGJJ5mEIPI6evvSoxLR) 2026-04-09 00:21:07.016658 | orchestrator | 2026-04-09 00:21:07.016670 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-09 00:21:07.016681 | orchestrator | Thursday 09 April 2026 00:21:05 +0000 (0:00:00.911) 0:00:24.052 ******** 2026-04-09 00:21:07.016691 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG/RQUYhXwWhlYHLgLTnk21vh5I0KpqT6m+Am3JA1z4V) 2026-04-09 00:21:07.016703 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5njkHcifG1UVIWbzbeYw6hpAGwPvYg1am9ylJhwPKS1EYRbUDh5imeLIerZyMycmDjSyO2GYTV2wLNkrFoe/oXQ2721qUNmE4/OT/el/GTGKv9aZs+lppAe01BsdG3BR5R32kOyjDP+IdLzW/CYdAwbuKyEM3nEtYJkApFtdFdWWfp6xI5d4yg+1feeeROv4RjWoyyRmxrl7x/UwY+M7F4Ec+bzp4AHA6sFvE/zxFaUvkoeC7j1Q9UnOAMY7eJKojFtvyPXLXwx3q3GuCrBvnTRCN+E+DI+eJ+5aTte8HomLjBwTi8QNNJRqH7ywBrdK1ygrab7KwVk0aHcXpRJHDfQ3zWxY/+GMUj7g6AgRkmQtPE4uXGqFz3NJaCE3HFwUV6CtDb+80f7LcyPYxN1OxR2Yr/SVx9yvcjIu+7GgJb2UE6bQU9tSIVEI5/zvo90w2xKy7FdAQ77ZJNLMOkLwAl0IZO46qkkymZYXJfI6+Sj8xW6xT4QA5uR9MxTh2dW0=) 2026-04-09 00:21:07.016736 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIwquoSpMWicjh4ro5blIpXvtIt5dPXP2fxF5ro13bWLC9/Cz2/3cZCju4YkvY6jv/pA83VNFeGslTAyA0HpJVE=) 2026-04-09 00:21:07.016748 | orchestrator | 2026-04-09 00:21:07.016759 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-09 00:21:07.016769 | orchestrator | Thursday 09 April 2026 00:21:06 +0000 (0:00:00.935) 0:00:24.988 ******** 2026-04-09 00:21:07.016781 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-09 00:21:07.016792 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-09 00:21:07.016803 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-09 00:21:07.016813 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-09 00:21:07.016841 | orchestrator 
| skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-09 00:21:07.016852 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-09 00:21:07.016863 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-09 00:21:07.016874 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:21:07.016885 | orchestrator | 2026-04-09 00:21:07.016896 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-09 00:21:07.016910 | orchestrator | Thursday 09 April 2026 00:21:06 +0000 (0:00:00.169) 0:00:25.158 ******** 2026-04-09 00:21:07.016922 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:21:07.016936 | orchestrator | 2026-04-09 00:21:07.016949 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-09 00:21:07.016962 | orchestrator | Thursday 09 April 2026 00:21:06 +0000 (0:00:00.047) 0:00:25.205 ******** 2026-04-09 00:21:07.016975 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:21:07.016988 | orchestrator | 2026-04-09 00:21:07.017001 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-09 00:21:07.017014 | orchestrator | Thursday 09 April 2026 00:21:06 +0000 (0:00:00.044) 0:00:25.250 ******** 2026-04-09 00:21:07.017027 | orchestrator | changed: [testbed-manager] 2026-04-09 00:21:07.017039 | orchestrator | 2026-04-09 00:21:07.017112 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:21:07.017129 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-09 00:21:07.017144 | orchestrator | 2026-04-09 00:21:07.017157 | orchestrator | 2026-04-09 00:21:07.017170 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:21:07.017183 | orchestrator | Thursday 09 April 2026 00:21:06 +0000 
(0:00:00.447) 0:00:25.697 ******** 2026-04-09 00:21:07.017196 | orchestrator | =============================================================================== 2026-04-09 00:21:07.017209 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.17s 2026-04-09 00:21:07.017222 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.03s 2026-04-09 00:21:07.017235 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-04-09 00:21:07.017248 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-09 00:21:07.017262 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-04-09 00:21:07.017273 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-04-09 00:21:07.017283 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-04-09 00:21:07.017294 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2026-04-09 00:21:07.017305 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-04-09 00:21:07.017324 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-04-09 00:21:07.017335 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-04-09 00:21:07.017346 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-04-09 00:21:07.017357 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.92s 2026-04-09 00:21:07.017367 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.91s 2026-04-09 00:21:07.017378 | orchestrator | osism.commons.known_hosts : Write scanned 
known_hosts entries ----------- 0.91s 2026-04-09 00:21:07.017388 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.91s 2026-04-09 00:21:07.017405 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.45s 2026-04-09 00:21:07.017424 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-04-09 00:21:07.017443 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-04-09 00:21:07.017461 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2026-04-09 00:21:07.130350 | orchestrator | + osism apply squid 2026-04-09 00:21:18.234937 | orchestrator | 2026-04-09 00:21:18 | INFO  | Prepare task for execution of squid. 2026-04-09 00:21:18.310774 | orchestrator | 2026-04-09 00:21:18 | INFO  | Task 84204ebd-0d67-4a3a-86ff-59590fc8def9 (squid) was prepared for execution. 2026-04-09 00:21:18.310864 | orchestrator | 2026-04-09 00:21:18 | INFO  | It takes a moment until task 84204ebd-0d67-4a3a-86ff-59590fc8def9 (squid) has been started and output is visible here. 
2026-04-09 00:23:19.159262 | orchestrator | 2026-04-09 00:23:19.159384 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-09 00:23:19.159402 | orchestrator | 2026-04-09 00:23:19.159414 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-09 00:23:19.159426 | orchestrator | Thursday 09 April 2026 00:21:21 +0000 (0:00:00.200) 0:00:00.200 ******** 2026-04-09 00:23:19.159438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-09 00:23:19.159451 | orchestrator | 2026-04-09 00:23:19.159462 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-09 00:23:19.159473 | orchestrator | Thursday 09 April 2026 00:21:21 +0000 (0:00:00.077) 0:00:00.277 ******** 2026-04-09 00:23:19.159484 | orchestrator | ok: [testbed-manager] 2026-04-09 00:23:19.159495 | orchestrator | 2026-04-09 00:23:19.159506 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-09 00:23:19.159517 | orchestrator | Thursday 09 April 2026 00:21:23 +0000 (0:00:02.325) 0:00:02.603 ******** 2026-04-09 00:23:19.159529 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-09 00:23:19.159540 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-09 00:23:19.159550 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-09 00:23:19.159562 | orchestrator | 2026-04-09 00:23:19.159595 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-09 00:23:19.159607 | orchestrator | Thursday 09 April 2026 00:21:25 +0000 (0:00:01.247) 0:00:03.850 ******** 2026-04-09 00:23:19.159618 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-09 00:23:19.159629 | 
orchestrator | 2026-04-09 00:23:19.159640 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-09 00:23:19.159650 | orchestrator | Thursday 09 April 2026 00:21:26 +0000 (0:00:01.061) 0:00:04.911 ******** 2026-04-09 00:23:19.159662 | orchestrator | ok: [testbed-manager] 2026-04-09 00:23:19.159674 | orchestrator | 2026-04-09 00:23:19.159685 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-09 00:23:19.159696 | orchestrator | Thursday 09 April 2026 00:21:26 +0000 (0:00:00.362) 0:00:05.274 ******** 2026-04-09 00:23:19.159728 | orchestrator | changed: [testbed-manager] 2026-04-09 00:23:19.159745 | orchestrator | 2026-04-09 00:23:19.159773 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-09 00:23:19.159796 | orchestrator | Thursday 09 April 2026 00:21:27 +0000 (0:00:00.866) 0:00:06.140 ******** 2026-04-09 00:23:19.159811 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-09 00:23:19.159824 | orchestrator | ok: [testbed-manager] 2026-04-09 00:23:19.159837 | orchestrator | 2026-04-09 00:23:19.159849 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-09 00:23:19.159863 | orchestrator | Thursday 09 April 2026 00:22:06 +0000 (0:00:38.883) 0:00:45.023 ******** 2026-04-09 00:23:19.159875 | orchestrator | changed: [testbed-manager] 2026-04-09 00:23:19.159888 | orchestrator | 2026-04-09 00:23:19.159901 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-09 00:23:19.159914 | orchestrator | Thursday 09 April 2026 00:22:18 +0000 (0:00:11.995) 0:00:57.019 ******** 2026-04-09 00:23:19.159927 | orchestrator | Pausing for 60 seconds 2026-04-09 00:23:19.159940 | orchestrator | changed: [testbed-manager] 2026-04-09 00:23:19.159952 | orchestrator | 2026-04-09 00:23:19.159965 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-09 00:23:19.159977 | orchestrator | Thursday 09 April 2026 00:23:18 +0000 (0:01:00.069) 0:01:57.088 ******** 2026-04-09 00:23:19.159990 | orchestrator | ok: [testbed-manager] 2026-04-09 00:23:19.160003 | orchestrator | 2026-04-09 00:23:19.160016 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-09 00:23:19.160061 | orchestrator | Thursday 09 April 2026 00:23:18 +0000 (0:00:00.058) 0:01:57.147 ******** 2026-04-09 00:23:19.160072 | orchestrator | changed: [testbed-manager] 2026-04-09 00:23:19.160083 | orchestrator | 2026-04-09 00:23:19.160099 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:23:19.160111 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:23:19.160122 | orchestrator | 2026-04-09 00:23:19.160133 | orchestrator | 2026-04-09 00:23:19.160144 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-09 00:23:19.160155 | orchestrator | Thursday 09 April 2026 00:23:18 +0000 (0:00:00.580) 0:01:57.727 ******** 2026-04-09 00:23:19.160166 | orchestrator | =============================================================================== 2026-04-09 00:23:19.160176 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2026-04-09 00:23:19.160187 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 38.88s 2026-04-09 00:23:19.160198 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.00s 2026-04-09 00:23:19.160209 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.33s 2026-04-09 00:23:19.160220 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.25s 2026-04-09 00:23:19.160230 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s 2026-04-09 00:23:19.160241 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.87s 2026-04-09 00:23:19.160252 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.58s 2026-04-09 00:23:19.160263 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s 2026-04-09 00:23:19.160274 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-04-09 00:23:19.160285 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-04-09 00:23:19.323955 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]] 2026-04-09 00:23:19.324119 | orchestrator | ++ semver 10.0.0 10.0.0-0 2026-04-09 00:23:19.401620 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-09 00:23:19.401671 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release/ 2026-04-09 00:23:19.409647 | orchestrator | + set -e 2026-04-09 00:23:19.409684 | orchestrator | + NAMESPACE=kolla/release/ 2026-04-09 00:23:19.409717 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-09 00:23:19.416766 | orchestrator | ++ semver 10.0.0 9.0.0 2026-04-09 00:23:19.480582 | orchestrator | + [[ 1 -lt 0 ]] 2026-04-09 00:23:19.481331 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-09 00:23:30.771430 | orchestrator | 2026-04-09 00:23:30 | INFO  | Prepare task for execution of operator. 2026-04-09 00:23:30.841801 | orchestrator | 2026-04-09 00:23:30 | INFO  | Task 173356f0-18fc-4bef-8fa1-50b5827f9116 (operator) was prepared for execution. 2026-04-09 00:23:30.841891 | orchestrator | 2026-04-09 00:23:30 | INFO  | It takes a moment until task 173356f0-18fc-4bef-8fa1-50b5827f9116 (operator) has been started and output is visible here. 2026-04-09 00:23:45.796544 | orchestrator | 2026-04-09 00:23:45.796658 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-09 00:23:45.796675 | orchestrator | 2026-04-09 00:23:45.796687 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-09 00:23:45.796698 | orchestrator | Thursday 09 April 2026 00:23:33 +0000 (0:00:00.178) 0:00:00.178 ******** 2026-04-09 00:23:45.796709 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:23:45.796722 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:23:45.796733 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:23:45.796744 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:23:45.796754 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:23:45.796765 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:23:45.796776 | orchestrator | 2026-04-09 00:23:45.796787 | orchestrator | TASK [Do not require tty for all users] 
**************************************** 2026-04-09 00:23:45.796798 | orchestrator | Thursday 09 April 2026 00:23:37 +0000 (0:00:03.449) 0:00:03.627 ******** 2026-04-09 00:23:45.796809 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:23:45.796820 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:23:45.796831 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:23:45.796842 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:23:45.796852 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:23:45.796863 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:23:45.796874 | orchestrator | 2026-04-09 00:23:45.796885 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-09 00:23:45.796896 | orchestrator | 2026-04-09 00:23:45.796906 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-09 00:23:45.796918 | orchestrator | Thursday 09 April 2026 00:23:38 +0000 (0:00:00.799) 0:00:04.426 ******** 2026-04-09 00:23:45.796930 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:23:45.796941 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:23:45.796951 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:23:45.796962 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:23:45.796973 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:23:45.796984 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:23:45.796995 | orchestrator | 2026-04-09 00:23:45.797006 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-09 00:23:45.797078 | orchestrator | Thursday 09 April 2026 00:23:38 +0000 (0:00:00.143) 0:00:04.570 ******** 2026-04-09 00:23:45.797093 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:23:45.797106 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:23:45.797118 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:23:45.797131 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:23:45.797144 | orchestrator | ok: 
[testbed-node-4] 2026-04-09 00:23:45.797156 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:23:45.797169 | orchestrator | 2026-04-09 00:23:45.797183 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-09 00:23:45.797196 | orchestrator | Thursday 09 April 2026 00:23:38 +0000 (0:00:00.160) 0:00:04.730 ******** 2026-04-09 00:23:45.797209 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:23:45.797222 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:23:45.797235 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:23:45.797248 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:23:45.797284 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:23:45.797297 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:23:45.797309 | orchestrator | 2026-04-09 00:23:45.797322 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-09 00:23:45.797334 | orchestrator | Thursday 09 April 2026 00:23:39 +0000 (0:00:00.650) 0:00:05.381 ******** 2026-04-09 00:23:45.797348 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:23:45.797361 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:23:45.797373 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:23:45.797385 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:23:45.797398 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:23:45.797410 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:23:45.797423 | orchestrator | 2026-04-09 00:23:45.797436 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-09 00:23:45.797447 | orchestrator | Thursday 09 April 2026 00:23:39 +0000 (0:00:00.844) 0:00:06.226 ******** 2026-04-09 00:23:45.797458 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-09 00:23:45.797469 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-09 00:23:45.797480 | orchestrator | 
changed: [testbed-node-3] => (item=adm) 2026-04-09 00:23:45.797490 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-04-09 00:23:45.797501 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-04-09 00:23:45.797512 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-09 00:23:45.797523 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-04-09 00:23:45.797533 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-04-09 00:23:45.797544 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-04-09 00:23:45.797555 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-04-09 00:23:45.797565 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-04-09 00:23:45.797576 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-04-09 00:23:45.797586 | orchestrator | 2026-04-09 00:23:45.797597 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-09 00:23:45.797608 | orchestrator | Thursday 09 April 2026 00:23:41 +0000 (0:00:01.175) 0:00:07.401 ******** 2026-04-09 00:23:45.797619 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:23:45.797629 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:23:45.797640 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:23:45.797651 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:23:45.797669 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:23:45.797688 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:23:45.797707 | orchestrator | 2026-04-09 00:23:45.797726 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-09 00:23:45.797745 | orchestrator | Thursday 09 April 2026 00:23:42 +0000 (0:00:01.257) 0:00:08.659 ******** 2026-04-09 00:23:45.797763 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:23:45.797781 | orchestrator | changed: [testbed-node-2] => (item=export 
LANGUAGE=C.UTF-8) 2026-04-09 00:23:45.797800 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:23:45.797819 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:23:45.797839 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:23:45.797873 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-04-09 00:23:45.797884 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-04-09 00:23:45.797912 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-04-09 00:23:45.797924 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-04-09 00:23:45.797934 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-04-09 00:23:45.797945 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-04-09 00:23:45.797956 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-04-09 00:23:45.797966 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:23:45.797987 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-04-09 00:23:45.797998 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-04-09 00:23:45.798009 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-04-09 00:23:45.798106 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:23:45.798118 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:23:45.798129 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:23:45.798139 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:23:45.798150 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-04-09 00:23:45.798161 | orchestrator | 2026-04-09 00:23:45.798171 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-09 00:23:45.798183 | orchestrator | Thursday 09 April 2026 00:23:43 +0000 (0:00:01.284) 0:00:09.943 ******** 2026-04-09 00:23:45.798194 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:23:45.798205 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:23:45.798216 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:23:45.798227 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:23:45.798238 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:23:45.798248 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:23:45.798259 | orchestrator | 2026-04-09 00:23:45.798270 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-09 00:23:45.798280 | orchestrator | Thursday 09 April 2026 00:23:43 +0000 (0:00:00.142) 0:00:10.085 ******** 2026-04-09 00:23:45.798291 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:23:45.798302 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:23:45.798312 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:23:45.798323 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:23:45.798334 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 00:23:45.798350 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:23:45.798368 | orchestrator | 2026-04-09 00:23:45.798396 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-09 00:23:45.798416 | orchestrator | Thursday 09 April 2026 00:23:43 +0000 (0:00:00.158) 0:00:10.244 ******** 2026-04-09 00:23:45.798435 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:23:45.798453 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:23:45.798473 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:23:45.798491 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:23:45.798510 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:23:45.798529 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:23:45.798547 | orchestrator | 2026-04-09 00:23:45.798567 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-09 00:23:45.798579 | orchestrator | Thursday 09 April 2026 00:23:44 +0000 (0:00:00.525) 0:00:10.769 ******** 2026-04-09 00:23:45.798590 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:23:45.798601 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:23:45.798611 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:23:45.798622 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:23:45.798632 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:23:45.798643 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:23:45.798653 | orchestrator | 2026-04-09 00:23:45.798664 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-09 00:23:45.798675 | orchestrator | Thursday 09 April 2026 00:23:44 +0000 (0:00:00.159) 0:00:10.929 ******** 2026-04-09 00:23:45.798685 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 00:23:45.798696 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:23:45.798706 | orchestrator | changed: 
[testbed-node-5] => (item=None) 2026-04-09 00:23:45.798717 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:23:45.798728 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-09 00:23:45.798748 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 00:23:45.798759 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:23:45.798769 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:23:45.798780 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:23:45.798790 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:23:45.798801 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-09 00:23:45.798811 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:23:45.798822 | orchestrator | 2026-04-09 00:23:45.798832 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-09 00:23:45.798843 | orchestrator | Thursday 09 April 2026 00:23:45 +0000 (0:00:00.882) 0:00:11.811 ******** 2026-04-09 00:23:45.798854 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:23:45.798865 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:23:45.798875 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:23:45.798886 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:23:45.798896 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:23:45.798907 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:23:45.798917 | orchestrator | 2026-04-09 00:23:45.798928 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-09 00:23:45.798939 | orchestrator | Thursday 09 April 2026 00:23:45 +0000 (0:00:00.147) 0:00:11.959 ******** 2026-04-09 00:23:45.798950 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:23:45.798960 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:23:45.798971 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:23:45.798981 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 00:23:45.799003 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:23:46.949246 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:23:46.949371 | orchestrator | 2026-04-09 00:23:46.949395 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-09 00:23:46.949415 | orchestrator | Thursday 09 April 2026 00:23:45 +0000 (0:00:00.135) 0:00:12.094 ******** 2026-04-09 00:23:46.949434 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:23:46.949452 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:23:46.949470 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:23:46.949488 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:23:46.949505 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:23:46.949521 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:23:46.949539 | orchestrator | 2026-04-09 00:23:46.949558 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-09 00:23:46.949575 | orchestrator | Thursday 09 April 2026 00:23:45 +0000 (0:00:00.127) 0:00:12.222 ******** 2026-04-09 00:23:46.949590 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:23:46.949606 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:23:46.949621 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:23:46.949637 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:23:46.949653 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:23:46.949670 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:23:46.949688 | orchestrator | 2026-04-09 00:23:46.949706 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-09 00:23:46.949725 | orchestrator | Thursday 09 April 2026 00:23:46 +0000 (0:00:00.637) 0:00:12.860 ******** 2026-04-09 00:23:46.949742 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:23:46.949758 | orchestrator | skipping: 
[testbed-node-1] 2026-04-09 00:23:46.949775 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:23:46.949793 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:23:46.949811 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:23:46.949830 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:23:46.949849 | orchestrator | 2026-04-09 00:23:46.949869 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:23:46.949889 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 00:23:46.949948 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 00:23:46.949967 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 00:23:46.949988 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 00:23:46.950008 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 00:23:46.950193 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-09 00:23:46.950215 | orchestrator | 2026-04-09 00:23:46.950233 | orchestrator | 2026-04-09 00:23:46.950253 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:23:46.950272 | orchestrator | Thursday 09 April 2026 00:23:46 +0000 (0:00:00.202) 0:00:13.062 ******** 2026-04-09 00:23:46.950291 | orchestrator | =============================================================================== 2026-04-09 00:23:46.950309 | orchestrator | Gathering Facts --------------------------------------------------------- 3.45s 2026-04-09 00:23:46.950329 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s 2026-04-09 
00:23:46.950347 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s 2026-04-09 00:23:46.950366 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s 2026-04-09 00:23:46.950383 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.88s 2026-04-09 00:23:46.950400 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s 2026-04-09 00:23:46.950418 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s 2026-04-09 00:23:46.950435 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.65s 2026-04-09 00:23:46.950455 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s 2026-04-09 00:23:46.950474 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.53s 2026-04-09 00:23:46.950491 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s 2026-04-09 00:23:46.950509 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2026-04-09 00:23:46.950527 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2026-04-09 00:23:46.950546 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s 2026-04-09 00:23:46.950566 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2026-04-09 00:23:46.950584 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s 2026-04-09 00:23:46.950603 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s 2026-04-09 00:23:46.950623 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 
2026-04-09 00:23:46.950642 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s 2026-04-09 00:23:47.108256 | orchestrator | + osism apply --environment custom facts 2026-04-09 00:23:48.306163 | orchestrator | 2026-04-09 00:23:48 | INFO  | Trying to run play facts in environment custom 2026-04-09 00:23:58.424821 | orchestrator | 2026-04-09 00:23:58 | INFO  | Prepare task for execution of facts. 2026-04-09 00:23:58.499875 | orchestrator | 2026-04-09 00:23:58 | INFO  | Task 59b2bf95-477b-4101-995e-c6e5c258d04c (facts) was prepared for execution. 2026-04-09 00:23:58.499967 | orchestrator | 2026-04-09 00:23:58 | INFO  | It takes a moment until task 59b2bf95-477b-4101-995e-c6e5c258d04c (facts) has been started and output is visible here. 2026-04-09 00:24:40.872515 | orchestrator | 2026-04-09 00:24:40.872629 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-09 00:24:40.872647 | orchestrator | 2026-04-09 00:24:40.872659 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-09 00:24:40.872672 | orchestrator | Thursday 09 April 2026 00:24:01 +0000 (0:00:00.111) 0:00:00.111 ******** 2026-04-09 00:24:40.872683 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:24:40.872697 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:24:40.872708 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:24:40.872719 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:24:40.872730 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:24:40.872804 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:24:40.872819 | orchestrator | ok: [testbed-manager] 2026-04-09 00:24:40.872840 | orchestrator | 2026-04-09 00:24:40.872858 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-09 00:24:40.872884 | orchestrator | Thursday 09 April 2026 00:24:02 +0000 (0:00:01.365) 
0:00:01.476 ******** 2026-04-09 00:24:40.872910 | orchestrator | ok: [testbed-manager] 2026-04-09 00:24:40.872928 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:24:40.872946 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:24:40.872964 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:24:40.872982 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:24:40.873025 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:24:40.873044 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:24:40.873064 | orchestrator | 2026-04-09 00:24:40.873083 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-09 00:24:40.873102 | orchestrator | 2026-04-09 00:24:40.873121 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-09 00:24:40.873141 | orchestrator | Thursday 09 April 2026 00:24:04 +0000 (0:00:01.184) 0:00:02.660 ******** 2026-04-09 00:24:40.873157 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:24:40.873170 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:24:40.873183 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:24:40.873196 | orchestrator | 2026-04-09 00:24:40.873210 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-09 00:24:40.873231 | orchestrator | Thursday 09 April 2026 00:24:04 +0000 (0:00:00.080) 0:00:02.741 ******** 2026-04-09 00:24:40.873244 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:24:40.873258 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:24:40.873270 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:24:40.873283 | orchestrator | 2026-04-09 00:24:40.873296 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-09 00:24:40.873309 | orchestrator | Thursday 09 April 2026 00:24:04 +0000 (0:00:00.172) 0:00:02.914 ******** 2026-04-09 00:24:40.873322 | orchestrator | ok: [testbed-node-3] 
2026-04-09 00:24:40.873334 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:24:40.873347 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:24:40.873359 | orchestrator | 2026-04-09 00:24:40.873373 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-09 00:24:40.873387 | orchestrator | Thursday 09 April 2026 00:24:04 +0000 (0:00:00.173) 0:00:03.087 ******** 2026-04-09 00:24:40.873400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:24:40.873414 | orchestrator | 2026-04-09 00:24:40.873427 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-09 00:24:40.873439 | orchestrator | Thursday 09 April 2026 00:24:04 +0000 (0:00:00.109) 0:00:03.197 ******** 2026-04-09 00:24:40.873450 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:24:40.873461 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:24:40.873471 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:24:40.873482 | orchestrator | 2026-04-09 00:24:40.873493 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-09 00:24:40.873527 | orchestrator | Thursday 09 April 2026 00:24:05 +0000 (0:00:00.406) 0:00:03.604 ******** 2026-04-09 00:24:40.873538 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:24:40.873550 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:24:40.873560 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:24:40.873571 | orchestrator | 2026-04-09 00:24:40.873582 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-09 00:24:40.873593 | orchestrator | Thursday 09 April 2026 00:24:05 +0000 (0:00:00.099) 0:00:03.703 ******** 2026-04-09 00:24:40.873603 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:24:40.873614 | 
orchestrator | changed: [testbed-node-5] 2026-04-09 00:24:40.873625 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:24:40.873636 | orchestrator | 2026-04-09 00:24:40.873646 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-09 00:24:40.873657 | orchestrator | Thursday 09 April 2026 00:24:06 +0000 (0:00:00.973) 0:00:04.677 ******** 2026-04-09 00:24:40.873668 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:24:40.873679 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:24:40.873689 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:24:40.873700 | orchestrator | 2026-04-09 00:24:40.873711 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-09 00:24:40.873722 | orchestrator | Thursday 09 April 2026 00:24:06 +0000 (0:00:00.563) 0:00:05.241 ******** 2026-04-09 00:24:40.873732 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:24:40.873743 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:24:40.873754 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:24:40.873764 | orchestrator | 2026-04-09 00:24:40.873776 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-09 00:24:40.873786 | orchestrator | Thursday 09 April 2026 00:24:07 +0000 (0:00:01.049) 0:00:06.290 ******** 2026-04-09 00:24:40.873797 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:24:40.873808 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:24:40.873819 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:24:40.873829 | orchestrator | 2026-04-09 00:24:40.873840 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-09 00:24:40.873851 | orchestrator | Thursday 09 April 2026 00:24:24 +0000 (0:00:16.646) 0:00:22.936 ******** 2026-04-09 00:24:40.873861 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:24:40.873872 | orchestrator | skipping: 
[testbed-node-4]
2026-04-09 00:24:40.873883 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:24:40.873894 | orchestrator |
2026-04-09 00:24:40.873905 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-09 00:24:40.873936 | orchestrator | Thursday 09 April 2026 00:24:24 +0000 (0:00:00.081) 0:00:23.018 ********
2026-04-09 00:24:40.873948 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:24:40.873959 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:24:40.873969 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:24:40.873980 | orchestrator |
2026-04-09 00:24:40.873991 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-09 00:24:40.874085 | orchestrator | Thursday 09 April 2026 00:24:32 +0000 (0:00:07.654) 0:00:30.673 ********
2026-04-09 00:24:40.874098 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:24:40.874109 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:24:40.874120 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:24:40.874130 | orchestrator |
2026-04-09 00:24:40.874141 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-09 00:24:40.874152 | orchestrator | Thursday 09 April 2026 00:24:32 +0000 (0:00:00.426) 0:00:31.099 ********
2026-04-09 00:24:40.874163 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-09 00:24:40.874174 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-09 00:24:40.874185 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-09 00:24:40.874195 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-09 00:24:40.874216 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-09 00:24:40.874227 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-09 00:24:40.874237 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-09 00:24:40.874248 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-09 00:24:40.874259 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-09 00:24:40.874269 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-09 00:24:40.874280 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-09 00:24:40.874296 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-09 00:24:40.874307 | orchestrator |
2026-04-09 00:24:40.874318 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-09 00:24:40.874329 | orchestrator | Thursday 09 April 2026 00:24:35 +0000 (0:00:03.454) 0:00:34.554 ********
2026-04-09 00:24:40.874339 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:24:40.874350 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:24:40.874361 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:24:40.874372 | orchestrator |
2026-04-09 00:24:40.874383 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-09 00:24:40.874394 | orchestrator |
2026-04-09 00:24:40.874404 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 00:24:40.874415 | orchestrator | Thursday 09 April 2026 00:24:37 +0000 (0:00:01.189) 0:00:35.744 ********
2026-04-09 00:24:40.874426 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:24:40.874437 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:24:40.874448 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:24:40.874458 | orchestrator | ok: [testbed-manager]
2026-04-09 00:24:40.874469 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:24:40.874480 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:24:40.874490 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:24:40.874501 | orchestrator |
2026-04-09 00:24:40.874512 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:24:40.874523 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:24:40.874535 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:24:40.874547 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:24:40.874558 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:24:40.874569 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:24:40.874580 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:24:40.874591 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:24:40.874601 | orchestrator |
2026-04-09 00:24:40.874612 | orchestrator |
2026-04-09 00:24:40.874623 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:24:40.874634 | orchestrator | Thursday 09 April 2026 00:24:40 +0000 (0:00:03.708) 0:00:39.452 ********
2026-04-09 00:24:40.874645 | orchestrator | ===============================================================================
2026-04-09 00:24:40.874655 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.65s
2026-04-09 00:24:40.874666 | orchestrator | Install required packages (Debian) -------------------------------------- 7.65s
2026-04-09 00:24:40.874684 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.71s
2026-04-09 00:24:40.874695 | orchestrator | Copy fact files --------------------------------------------------------- 3.45s
2026-04-09 00:24:40.874705 | orchestrator | Create custom facts directory ------------------------------------------- 1.37s
2026-04-09 00:24:40.874716 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.19s
2026-04-09 00:24:40.874734 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s
2026-04-09 00:24:41.048164 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s
2026-04-09 00:24:41.048263 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.97s
2026-04-09 00:24:41.048278 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.56s
2026-04-09 00:24:41.048289 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2026-04-09 00:24:41.048300 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2026-04-09 00:24:41.048310 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s
2026-04-09 00:24:41.048321 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.17s
2026-04-09 00:24:41.048332 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s
2026-04-09 00:24:41.048344 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2026-04-09 00:24:41.048355 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2026-04-09 00:24:41.048366 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.08s
2026-04-09 00:24:41.218155 | orchestrator | + osism apply bootstrap
2026-04-09 00:24:52.477266 | orchestrator | 2026-04-09 00:24:52 | INFO  | Prepare task for execution of bootstrap.
2026-04-09 00:24:52.549124 | orchestrator | 2026-04-09 00:24:52 | INFO  | Task 74857a83-ef2a-450e-8f20-6ad0f0e81e7d (bootstrap) was prepared for execution.
2026-04-09 00:24:52.549214 | orchestrator | 2026-04-09 00:24:52 | INFO  | It takes a moment until task 74857a83-ef2a-450e-8f20-6ad0f0e81e7d (bootstrap) has been started and output is visible here.
2026-04-09 00:25:08.101773 | orchestrator |
2026-04-09 00:25:08.101870 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-09 00:25:08.101888 | orchestrator |
2026-04-09 00:25:08.101903 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-09 00:25:08.101914 | orchestrator | Thursday 09 April 2026 00:24:55 +0000 (0:00:00.183) 0:00:00.183 ********
2026-04-09 00:25:08.101926 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:08.101939 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:25:08.101949 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:25:08.101960 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:25:08.101971 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:08.101981 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:08.102098 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:08.102116 | orchestrator |
2026-04-09 00:25:08.102127 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-09 00:25:08.102138 | orchestrator |
2026-04-09 00:25:08.102149 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 00:25:08.102160 | orchestrator | Thursday 09 April 2026 00:24:56 +0000 (0:00:00.290) 0:00:00.474 ********
2026-04-09 00:25:08.102171 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:25:08.102182 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:25:08.102193 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:25:08.102204 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:08.102214 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:08.102225 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:08.102236 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:08.102246 | orchestrator |
2026-04-09 00:25:08.102257 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-09 00:25:08.102287 | orchestrator |
2026-04-09 00:25:08.102308 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 00:25:08.102319 | orchestrator | Thursday 09 April 2026 00:25:01 +0000 (0:00:05.049) 0:00:05.523 ********
2026-04-09 00:25:08.102331 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-09 00:25:08.102342 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-09 00:25:08.102353 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-09 00:25:08.102363 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-09 00:25:08.102374 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-09 00:25:08.102385 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:25:08.102396 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:25:08.102406 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-09 00:25:08.102418 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:25:08.102429 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-09 00:25:08.102439 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-09 00:25:08.102450 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-09 00:25:08.102461 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-09 00:25:08.102472 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:25:08.102483 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-09 00:25:08.102493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-09 00:25:08.102504 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-09 00:25:08.102515 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 00:25:08.102526 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:08.102536 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-09 00:25:08.102547 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 00:25:08.102558 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 00:25:08.102568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 00:25:08.102579 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 00:25:08.102590 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 00:25:08.102601 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-09 00:25:08.102611 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-09 00:25:08.102622 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-09 00:25:08.102633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 00:25:08.102644 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 00:25:08.102654 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 00:25:08.102665 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-09 00:25:08.102676 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-09 00:25:08.102686 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 00:25:08.102697 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 00:25:08.102708 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-09 00:25:08.102718 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 00:25:08.102729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 00:25:08.102740 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:08.102750 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-09 00:25:08.102761 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-09 00:25:08.102772 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-09 00:25:08.102789 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 00:25:08.102800 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:08.102811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:25:08.102822 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-09 00:25:08.102849 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 00:25:08.102865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:25:08.102876 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-09 00:25:08.102886 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:08.102897 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-09 00:25:08.102908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:25:08.102918 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:08.102929 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-09 00:25:08.102940 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-09 00:25:08.102951 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:08.102961 | orchestrator |
2026-04-09 00:25:08.102972 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-09 00:25:08.102983 | orchestrator |
2026-04-09 00:25:08.103061 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-09 00:25:08.103076 | orchestrator | Thursday 09 April 2026 00:25:01 +0000 (0:00:00.440) 0:00:05.963 ********
2026-04-09 00:25:08.103087 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:08.103097 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:08.103107 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:08.103117 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:25:08.103126 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:08.103136 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:25:08.103145 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:25:08.103155 | orchestrator |
2026-04-09 00:25:08.103164 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-09 00:25:08.103174 | orchestrator | Thursday 09 April 2026 00:25:02 +0000 (0:00:01.192) 0:00:07.156 ********
2026-04-09 00:25:08.103184 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:08.103193 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:25:08.103203 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:25:08.103212 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:08.103222 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:25:08.103231 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:08.103241 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:08.103250 | orchestrator |
2026-04-09 00:25:08.103260 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-09 00:25:08.103269 | orchestrator | Thursday 09 April 2026 00:25:03 +0000 (0:00:01.144) 0:00:08.300 ********
2026-04-09 00:25:08.103280 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:25:08.103291 | orchestrator |
2026-04-09 00:25:08.103301 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-09 00:25:08.103311 | orchestrator | Thursday 09 April 2026 00:25:04 +0000 (0:00:00.263) 0:00:08.564 ********
2026-04-09 00:25:08.103320 | orchestrator | changed: [testbed-manager]
2026-04-09 00:25:08.103330 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:25:08.103339 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:25:08.103349 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:25:08.103358 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:25:08.103368 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:25:08.103377 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:25:08.103387 | orchestrator |
2026-04-09 00:25:08.103397 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-09 00:25:08.103406 | orchestrator | Thursday 09 April 2026 00:25:05 +0000 (0:00:01.480) 0:00:10.045 ********
2026-04-09 00:25:08.103422 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:25:08.103433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:25:08.103444 | orchestrator |
2026-04-09 00:25:08.103454 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-09 00:25:08.103464 | orchestrator | Thursday 09 April 2026 00:25:05 +0000 (0:00:00.257) 0:00:10.302 ********
2026-04-09 00:25:08.103473 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:25:08.103483 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:25:08.103492 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:25:08.103502 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:25:08.103511 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:25:08.103521 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:25:08.103530 | orchestrator |
2026-04-09 00:25:08.103539 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-09 00:25:08.103549 | orchestrator | Thursday 09 April 2026 00:25:06 +0000 (0:00:00.999) 0:00:11.302 ********
2026-04-09 00:25:08.103559 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:25:08.103568 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:25:08.103578 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:25:08.103646 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:25:08.103657 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:25:08.103666 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:25:08.103676 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:25:08.103686 | orchestrator |
2026-04-09 00:25:08.103695 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-09 00:25:08.103705 | orchestrator | Thursday 09 April 2026 00:25:07 +0000 (0:00:00.711) 0:00:12.013 ********
2026-04-09 00:25:08.103714 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:08.103724 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:08.103733 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:08.103743 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:08.103752 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:08.103762 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:08.103771 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:08.103781 | orchestrator |
2026-04-09 00:25:08.103791 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-09 00:25:08.103801 | orchestrator | Thursday 09 April 2026 00:25:07 +0000 (0:00:00.407) 0:00:12.421 ********
2026-04-09 00:25:08.103811 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:25:08.103820 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:08.103843 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:20.039816 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:20.039922 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:20.039936 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:20.039944 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:20.039953 | orchestrator |
2026-04-09 00:25:20.039962 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-09 00:25:20.039973 | orchestrator | Thursday 09 April 2026 00:25:08 +0000 (0:00:00.211) 0:00:12.632 ********
2026-04-09 00:25:20.039983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:25:20.040040 | orchestrator |
2026-04-09 00:25:20.040049 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-09 00:25:20.040058 | orchestrator | Thursday 09 April 2026 00:25:08 +0000 (0:00:00.273) 0:00:12.905 ********
2026-04-09 00:25:20.040066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:25:20.040094 | orchestrator |
2026-04-09 00:25:20.040103 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-09 00:25:20.040111 | orchestrator | Thursday 09 April 2026 00:25:08 +0000 (0:00:00.289) 0:00:13.195 ********
2026-04-09 00:25:20.040119 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:20.040128 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:25:20.040136 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:20.040143 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:25:20.040151 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:20.040159 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:25:20.040166 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:20.040174 | orchestrator |
2026-04-09 00:25:20.040182 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-09 00:25:20.040189 | orchestrator | Thursday 09 April 2026 00:25:09 +0000 (0:00:01.062) 0:00:14.257 ********
2026-04-09 00:25:20.040197 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:25:20.040205 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:20.040213 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:20.040220 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:20.040228 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:20.040235 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:20.040243 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:20.040251 | orchestrator |
2026-04-09 00:25:20.040259 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-09 00:25:20.040266 | orchestrator | Thursday 09 April 2026 00:25:09 +0000 (0:00:00.197) 0:00:14.455 ********
2026-04-09 00:25:20.040274 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:20.040282 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:25:20.040289 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:25:20.040297 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:25:20.040305 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:20.040312 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:20.040320 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:20.040328 | orchestrator |
2026-04-09 00:25:20.040335 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-09 00:25:20.040344 | orchestrator | Thursday 09 April 2026 00:25:10 +0000 (0:00:00.479) 0:00:14.934 ********
2026-04-09 00:25:20.040354 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:25:20.040363 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:20.040372 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:20.040382 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:20.040391 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:20.040401 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:20.040410 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:20.040420 | orchestrator |
2026-04-09 00:25:20.040430 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-09 00:25:20.040439 | orchestrator | Thursday 09 April 2026 00:25:10 +0000 (0:00:00.265) 0:00:15.200 ********
2026-04-09 00:25:20.040447 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:25:20.040455 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:25:20.040463 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:20.040470 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:25:20.040478 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:25:20.040486 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:25:20.040493 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:25:20.040506 | orchestrator |
2026-04-09 00:25:20.040519 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-09 00:25:20.040534 | orchestrator | Thursday 09 April 2026 00:25:11 +0000 (0:00:00.540) 0:00:15.740 ********
2026-04-09 00:25:20.040553 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:20.040565 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:25:20.040577 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:25:20.040600 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:25:20.040613 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:25:20.040626 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:25:20.040638 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:25:20.040651 | orchestrator |
2026-04-09 00:25:20.040664 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-09 00:25:20.040679 | orchestrator | Thursday 09 April 2026 00:25:12 +0000 (0:00:00.982) 0:00:16.722 ********
2026-04-09 00:25:20.040692 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:25:20.040705 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:25:20.040717 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:20.040730 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:20.040743 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:20.040755 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:25:20.040767 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:20.040780 | orchestrator |
2026-04-09 00:25:20.040792 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-09 00:25:20.040803 | orchestrator | Thursday 09 April 2026 00:25:14 +0000 (0:00:01.913) 0:00:18.636 ********
2026-04-09 00:25:20.040836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:25:20.040850 | orchestrator |
2026-04-09 00:25:20.040878 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-09 00:25:20.040890 | orchestrator | Thursday 09 April 2026 00:25:14 +0000 (0:00:00.282) 0:00:18.918 ********
2026-04-09 00:25:20.040903 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:25:20.040915 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:25:20.040927 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:25:20.040940 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:25:20.040953 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:25:20.040965 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:25:20.040978 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:25:20.041045 | orchestrator |
2026-04-09 00:25:20.041064 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-09 00:25:20.041077 | orchestrator | Thursday 09 April 2026 00:25:15 +0000 (0:00:01.191) 0:00:20.109 ********
2026-04-09 00:25:20.041090 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:20.041104 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:25:20.041117 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:25:20.041130 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:25:20.041141 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:20.041149 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:20.041157 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:20.041165 | orchestrator |
2026-04-09 00:25:20.041173 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-09 00:25:20.041181 | orchestrator | Thursday 09 April 2026 00:25:15 +0000 (0:00:00.238) 0:00:20.347 ********
2026-04-09 00:25:20.041188 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:20.041196 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:25:20.041204 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:25:20.041212 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:25:20.041219 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:20.041227 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:20.041235 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:20.041242 | orchestrator |
2026-04-09 00:25:20.041251 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-09 00:25:20.041258 | orchestrator | Thursday 09 April 2026 00:25:16 +0000 (0:00:00.248) 0:00:20.596 ********
2026-04-09 00:25:20.041266 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:20.041274 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:25:20.041282 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:25:20.041289 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:25:20.041297 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:20.041314 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:20.041322 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:20.041330 | orchestrator |
2026-04-09 00:25:20.041337 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-09 00:25:20.041345 | orchestrator | Thursday 09 April 2026 00:25:16 +0000 (0:00:00.200) 0:00:20.796 ********
2026-04-09 00:25:20.041354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:25:20.041365 | orchestrator |
2026-04-09 00:25:20.041373 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-09 00:25:20.041381 | orchestrator | Thursday 09 April 2026 00:25:16 +0000 (0:00:00.278) 0:00:21.075 ********
2026-04-09 00:25:20.041388 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:20.041396 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:25:20.041404 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:25:20.041412 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:20.041419 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:25:20.041427 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:20.041435 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:20.041442 | orchestrator |
2026-04-09 00:25:20.041450 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-09 00:25:20.041458 | orchestrator | Thursday 09 April 2026 00:25:17 +0000 (0:00:00.599) 0:00:21.674 ********
2026-04-09 00:25:20.041466 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:25:20.041474 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:25:20.041485 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:25:20.041498 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:25:20.041511 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:25:20.041523 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:25:20.041536 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:25:20.041549 | orchestrator |
2026-04-09 00:25:20.041563 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-09 00:25:20.041576 | orchestrator | Thursday 09 April 2026 00:25:17 +0000 (0:00:00.206) 0:00:21.881 ********
2026-04-09 00:25:20.041591 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:20.041600 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:20.041607 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:25:20.041615 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:25:20.041623 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:25:20.041631 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:20.041638 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:20.041646 | orchestrator |
2026-04-09 00:25:20.041654 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-09 00:25:20.041662 | orchestrator | Thursday 09 April 2026 00:25:18 +0000 (0:00:01.034) 0:00:22.916 ********
2026-04-09 00:25:20.041670 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:20.041677 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:25:20.041685 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:25:20.041693 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:25:20.041700 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:20.041708 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:20.041716 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:25:20.041724 | orchestrator |
2026-04-09 00:25:20.041732 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-09 00:25:20.041739 | orchestrator | Thursday 09 April 2026 00:25:19 +0000 (0:00:00.604) 0:00:23.520 ********
2026-04-09 00:25:20.041747 | orchestrator | ok: [testbed-manager]
2026-04-09 00:25:20.041755 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:25:20.041763 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:25:20.041778 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:25:20.041797 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:00.222250 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:26:00.222349 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:26:00.222356 | orchestrator |
2026-04-09 00:26:00.222362 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-09 00:26:00.222368 | orchestrator | Thursday 09 April 2026 00:25:20 +0000 (0:00:01.034) 0:00:24.555 ********
2026-04-09 00:26:00.222373 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:00.222378 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:00.222382 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:00.222387 | orchestrator | changed: [testbed-manager]
2026-04-09 00:26:00.222391 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:26:00.222396 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:26:00.222401 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:26:00.222405 | orchestrator |
2026-04-09 00:26:00.222409 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-09 00:26:00.222414 | orchestrator | Thursday 09 April 2026 00:25:36 +0000 (0:00:16.677) 0:00:41.232 ********
2026-04-09 00:26:00.222419 | orchestrator | ok: [testbed-manager]
2026-04-09 00:26:00.222423 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:26:00.222427 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:26:00.222432 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:26:00.222436 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:00.222440 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:00.222445 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:00.222449 | orchestrator |
2026-04-09 00:26:00.222453 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-09 00:26:00.222458 | orchestrator | Thursday 09 April 2026 00:25:36 +0000 (0:00:00.204) 0:00:41.437 ********
2026-04-09 00:26:00.222462 | orchestrator | ok: [testbed-manager]
2026-04-09 00:26:00.222466 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:26:00.222471 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:26:00.222475 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:26:00.222479 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:00.222484 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:00.222488 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:00.222492 | orchestrator |
2026-04-09 00:26:00.222497 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-09 00:26:00.222502 | orchestrator | Thursday 09 April 2026 00:25:37 +0000 (0:00:00.206) 0:00:41.643 ********
2026-04-09 00:26:00.222506 | orchestrator | ok: [testbed-manager]
2026-04-09 00:26:00.222510 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:26:00.222515 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:26:00.222519 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:26:00.222524 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:00.222528 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:00.222532 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:00.222537 | orchestrator |
2026-04-09 00:26:00.222541 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-09 00:26:00.222545 | orchestrator | Thursday 09 April 2026 00:25:37 +0000 (0:00:00.205) 0:00:41.849 ********
2026-04-09 00:26:00.222551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:26:00.222558 | orchestrator |
2026-04-09 00:26:00.222562 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-09 00:26:00.222567 | orchestrator | Thursday 09 April 2026 00:25:37 +0000 (0:00:00.284) 0:00:42.134 ********
2026-04-09 00:26:00.222571 | orchestrator | ok: [testbed-manager]
2026-04-09 00:26:00.222575 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:26:00.222580 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:26:00.222584 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:26:00.222588 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:26:00.222593 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:26:00.222597 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:26:00.222601 | orchestrator |
2026-04-09 00:26:00.222606 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-09 00:26:00.222614 | orchestrator | Thursday 09 April 2026 00:25:39 +0000 (0:00:01.635) 0:00:43.769 ********
2026-04-09 00:26:00.222619 | orchestrator | changed: [testbed-manager]
2026-04-09 00:26:00.222624 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:26:00.222628 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:26:00.222632 | orchestrator |
changed: [testbed-node-2] 2026-04-09 00:26:00.222637 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:26:00.222641 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:00.222645 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:00.222650 | orchestrator | 2026-04-09 00:26:00.222654 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-04-09 00:26:00.222658 | orchestrator | Thursday 09 April 2026 00:25:40 +0000 (0:00:01.065) 0:00:44.834 ******** 2026-04-09 00:26:00.222663 | orchestrator | ok: [testbed-manager] 2026-04-09 00:26:00.222667 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:26:00.222671 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:00.222676 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:00.222680 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:26:00.222684 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:26:00.222689 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:00.222693 | orchestrator | 2026-04-09 00:26:00.222697 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-04-09 00:26:00.222702 | orchestrator | Thursday 09 April 2026 00:25:42 +0000 (0:00:01.683) 0:00:46.518 ******** 2026-04-09 00:26:00.222707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:26:00.222713 | orchestrator | 2026-04-09 00:26:00.222717 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-04-09 00:26:00.222722 | orchestrator | Thursday 09 April 2026 00:25:42 +0000 (0:00:00.216) 0:00:46.734 ******** 2026-04-09 00:26:00.222727 | orchestrator | changed: [testbed-manager] 2026-04-09 00:26:00.222731 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:26:00.222735 | 
orchestrator | changed: [testbed-node-0] 2026-04-09 00:26:00.222740 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:26:00.222744 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:26:00.222748 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:00.222762 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:00.222767 | orchestrator | 2026-04-09 00:26:00.222782 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2026-04-09 00:26:00.222787 | orchestrator | Thursday 09 April 2026 00:25:43 +0000 (0:00:01.035) 0:00:47.770 ******** 2026-04-09 00:26:00.222791 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:26:00.222795 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:26:00.222799 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:26:00.222804 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:26:00.222808 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:26:00.222812 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:26:00.222817 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:26:00.222822 | orchestrator | 2026-04-09 00:26:00.222828 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-04-09 00:26:00.222833 | orchestrator | Thursday 09 April 2026 00:25:43 +0000 (0:00:00.176) 0:00:47.946 ******** 2026-04-09 00:26:00.222838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:26:00.222843 | orchestrator | 2026-04-09 00:26:00.222848 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-04-09 00:26:00.222853 | orchestrator | Thursday 09 April 2026 00:25:43 +0000 (0:00:00.233) 0:00:48.179 ******** 2026-04-09 00:26:00.222858 | orchestrator | ok: 
[testbed-manager] 2026-04-09 00:26:00.222863 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:00.222872 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:26:00.222877 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:26:00.222883 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:00.222888 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:26:00.222892 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:00.222899 | orchestrator | 2026-04-09 00:26:00.222907 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-09 00:26:00.222914 | orchestrator | Thursday 09 April 2026 00:25:45 +0000 (0:00:01.654) 0:00:49.834 ******** 2026-04-09 00:26:00.222923 | orchestrator | changed: [testbed-manager] 2026-04-09 00:26:00.222933 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:26:00.222942 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:26:00.222949 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:26:00.222957 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:26:00.222964 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:00.222971 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:00.223017 | orchestrator | 2026-04-09 00:26:00.223025 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-09 00:26:00.223033 | orchestrator | Thursday 09 April 2026 00:25:46 +0000 (0:00:01.253) 0:00:51.088 ******** 2026-04-09 00:26:00.223040 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:26:00.223046 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:26:00.223053 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:26:00.223059 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:26:00.223067 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:26:00.223074 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:26:00.223080 | orchestrator | changed: [testbed-manager] 2026-04-09 00:26:00.223088 | 
orchestrator | 2026-04-09 00:26:00.223095 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-09 00:26:00.223102 | orchestrator | Thursday 09 April 2026 00:25:57 +0000 (0:00:11.107) 0:01:02.196 ******** 2026-04-09 00:26:00.223110 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:00.223117 | orchestrator | ok: [testbed-manager] 2026-04-09 00:26:00.223125 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:26:00.223132 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:26:00.223137 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:00.223144 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:26:00.223151 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:00.223156 | orchestrator | 2026-04-09 00:26:00.223161 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-09 00:26:00.223167 | orchestrator | Thursday 09 April 2026 00:25:58 +0000 (0:00:00.926) 0:01:03.123 ******** 2026-04-09 00:26:00.223172 | orchestrator | ok: [testbed-manager] 2026-04-09 00:26:00.223177 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:26:00.223182 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:00.223186 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:26:00.223190 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:00.223194 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:00.223198 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:26:00.223203 | orchestrator | 2026-04-09 00:26:00.223207 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-09 00:26:00.223212 | orchestrator | Thursday 09 April 2026 00:25:59 +0000 (0:00:00.877) 0:01:04.000 ******** 2026-04-09 00:26:00.223216 | orchestrator | ok: [testbed-manager] 2026-04-09 00:26:00.223220 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:26:00.223224 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:26:00.223228 | orchestrator | ok: 
[testbed-node-2] 2026-04-09 00:26:00.223233 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:00.223237 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:00.223241 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:00.223245 | orchestrator | 2026-04-09 00:26:00.223250 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-09 00:26:00.223254 | orchestrator | Thursday 09 April 2026 00:25:59 +0000 (0:00:00.208) 0:01:04.209 ******** 2026-04-09 00:26:00.223259 | orchestrator | ok: [testbed-manager] 2026-04-09 00:26:00.223268 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:26:00.223272 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:26:00.223277 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:26:00.223281 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:26:00.223285 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:26:00.223289 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:26:00.223294 | orchestrator | 2026-04-09 00:26:00.223298 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-09 00:26:00.223302 | orchestrator | Thursday 09 April 2026 00:25:59 +0000 (0:00:00.206) 0:01:04.415 ******** 2026-04-09 00:26:00.223307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:26:00.223312 | orchestrator | 2026-04-09 00:26:00.223322 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-09 00:28:13.138559 | orchestrator | Thursday 09 April 2026 00:26:00 +0000 (0:00:00.264) 0:01:04.680 ******** 2026-04-09 00:28:13.138660 | orchestrator | ok: [testbed-manager] 2026-04-09 00:28:13.138670 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:28:13.138675 | orchestrator | 
ok: [testbed-node-0] 2026-04-09 00:28:13.138679 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:28:13.138683 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:28:13.138687 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:28:13.138691 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:28:13.138695 | orchestrator | 2026-04-09 00:28:13.138699 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-04-09 00:28:13.138704 | orchestrator | Thursday 09 April 2026 00:26:02 +0000 (0:00:01.798) 0:01:06.479 ******** 2026-04-09 00:28:13.138708 | orchestrator | changed: [testbed-manager] 2026-04-09 00:28:13.138713 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:28:13.138717 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:28:13.138721 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:28:13.138725 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:28:13.138728 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:28:13.138734 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:28:13.138739 | orchestrator | 2026-04-09 00:28:13.138745 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-09 00:28:13.138768 | orchestrator | Thursday 09 April 2026 00:26:02 +0000 (0:00:00.616) 0:01:07.095 ******** 2026-04-09 00:28:13.138774 | orchestrator | ok: [testbed-manager] 2026-04-09 00:28:13.138781 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:28:13.138787 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:28:13.138792 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:28:13.138796 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:28:13.138800 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:28:13.138804 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:28:13.138808 | orchestrator | 2026-04-09 00:28:13.138812 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-09 
00:28:13.138816 | orchestrator | Thursday 09 April 2026 00:26:02 +0000 (0:00:00.245) 0:01:07.340 ******** 2026-04-09 00:28:13.138819 | orchestrator | ok: [testbed-manager] 2026-04-09 00:28:13.138823 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:28:13.138827 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:28:13.138831 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:28:13.138834 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:28:13.138838 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:28:13.138842 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:28:13.138845 | orchestrator | 2026-04-09 00:28:13.138849 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-09 00:28:13.138853 | orchestrator | Thursday 09 April 2026 00:26:04 +0000 (0:00:01.290) 0:01:08.630 ******** 2026-04-09 00:28:13.138857 | orchestrator | changed: [testbed-manager] 2026-04-09 00:28:13.138863 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:28:13.138867 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:28:13.138884 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:28:13.138888 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:28:13.138894 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:28:13.138900 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:28:13.138905 | orchestrator | 2026-04-09 00:28:13.138910 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-09 00:28:13.138916 | orchestrator | Thursday 09 April 2026 00:26:06 +0000 (0:00:01.906) 0:01:10.537 ******** 2026-04-09 00:28:13.138922 | orchestrator | ok: [testbed-manager] 2026-04-09 00:28:13.138928 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:28:13.138933 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:28:13.138959 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:28:13.138965 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:28:13.138970 | orchestrator | ok: 
[testbed-node-4] 2026-04-09 00:28:13.138975 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:28:13.138981 | orchestrator | 2026-04-09 00:28:13.138986 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-09 00:28:13.138991 | orchestrator | Thursday 09 April 2026 00:26:08 +0000 (0:00:02.408) 0:01:12.946 ******** 2026-04-09 00:28:13.138996 | orchestrator | ok: [testbed-manager] 2026-04-09 00:28:13.139002 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:28:13.139008 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:28:13.139014 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:28:13.139019 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:28:13.139025 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:28:13.139030 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:28:13.139036 | orchestrator | 2026-04-09 00:28:13.139041 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-09 00:28:13.139047 | orchestrator | Thursday 09 April 2026 00:26:46 +0000 (0:00:37.582) 0:01:50.528 ******** 2026-04-09 00:28:13.139053 | orchestrator | changed: [testbed-manager] 2026-04-09 00:28:13.139059 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:28:13.139065 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:28:13.139071 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:28:13.139076 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:28:13.139083 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:28:13.139089 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:28:13.139095 | orchestrator | 2026-04-09 00:28:13.139101 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-09 00:28:13.139108 | orchestrator | Thursday 09 April 2026 00:27:57 +0000 (0:01:11.502) 0:03:02.030 ******** 2026-04-09 00:28:13.139113 | orchestrator | ok: [testbed-manager] 2026-04-09 00:28:13.139116 | orchestrator | 
ok: [testbed-node-3] 2026-04-09 00:28:13.139120 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:28:13.139124 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:28:13.139129 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:28:13.139133 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:28:13.139137 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:28:13.139141 | orchestrator | 2026-04-09 00:28:13.139146 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-09 00:28:13.139151 | orchestrator | Thursday 09 April 2026 00:27:59 +0000 (0:00:01.820) 0:03:03.851 ******** 2026-04-09 00:28:13.139155 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:28:13.139159 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:28:13.139163 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:28:13.139168 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:28:13.139172 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:28:13.139176 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:28:13.139181 | orchestrator | changed: [testbed-manager] 2026-04-09 00:28:13.139185 | orchestrator | 2026-04-09 00:28:13.139189 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-09 00:28:13.139244 | orchestrator | Thursday 09 April 2026 00:28:11 +0000 (0:00:12.438) 0:03:16.289 ******** 2026-04-09 00:28:13.139270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-09 00:28:13.139288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-09 00:28:13.139294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-09 00:28:13.139302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-09 00:28:13.139307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-09 00:28:13.139312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-04-09 00:28:13.139316 | orchestrator | 2026-04-09 00:28:13.139321 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-09 00:28:13.139325 | orchestrator | Thursday 09 April 2026 00:28:12 +0000 (0:00:00.453) 0:03:16.743 ******** 2026-04-09 00:28:13.139330 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-09 00:28:13.139334 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:28:13.139339 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-09 00:28:13.139344 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-09 00:28:13.139348 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:28:13.139352 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:28:13.139356 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-09 00:28:13.139361 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:28:13.139365 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 00:28:13.139369 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 00:28:13.139373 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-09 00:28:13.139378 | orchestrator | 2026-04-09 00:28:13.139382 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-09 00:28:13.139387 | orchestrator | Thursday 09 April 2026 00:28:13 +0000 (0:00:00.784) 0:03:17.527 ******** 2026-04-09 00:28:13.139395 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-09 00:28:13.139401 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-09 00:28:13.139405 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-09 00:28:13.139410 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-09 00:28:13.139417 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-09 00:28:13.139424 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-09 00:28:20.100514 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-09 00:28:20.100598 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-09 00:28:20.100607 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-09 00:28:20.100614 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-09 00:28:20.100621 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:28:20.100628 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-09 00:28:20.100634 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-09 00:28:20.100639 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-09 00:28:20.100644 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-09 00:28:20.100650 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-09 00:28:20.100655 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-09 
00:28:20.100661 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-09 00:28:20.100666 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-09 00:28:20.100671 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-09 00:28:20.100677 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-09 00:28:20.100682 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-09 00:28:20.100688 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-09 00:28:20.100693 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-09 00:28:20.100698 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-09 00:28:20.100704 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-09 00:28:20.100709 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-09 00:28:20.100715 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-09 00:28:20.100720 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-09 00:28:20.100726 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-09 00:28:20.100731 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-09 00:28:20.100736 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:28:20.100742 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
00:28:20.100747 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 00:28:20.100770 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 00:28:20.100776 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 00:28:20.100782 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 00:28:20.100787 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 00:28:20.100792 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 00:28:20.100798 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 00:28:20.100803 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 00:28:20.100809 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 00:28:20.100814 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 00:28:20.100819 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:28:20.100825 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 00:28:20.100830 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 00:28:20.100835 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-09 00:28:20.100841 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 00:28:20.100846 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 00:28:20.100864 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 00:28:20.100870 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 00:28:20.100876 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 00:28:20.100881 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 00:28:20.100886 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-09 00:28:20.100892 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 00:28:20.100897 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-09 00:28:20.100902 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 00:28:20.100908 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-09 00:28:20.100913 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 00:28:20.100918 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 00:28:20.100924 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 00:28:20.100929 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 00:28:20.101012 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 00:28:20.101020 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 00:28:20.101025 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 00:28:20.101031 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-09 00:28:20.101042 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 00:28:20.101047 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-09 00:28:20.101053 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 00:28:20.101058 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-09 00:28:20.101063 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 00:28:20.101069 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-09 00:28:20.101075 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-09 00:28:20.101082 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-09 00:28:20.101088 | orchestrator |
2026-04-09 00:28:20.101104 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-09 00:28:20.101111 | orchestrator | Thursday 09 April 2026 00:28:18 +0000 (0:00:05.839) 0:03:23.367 ********
2026-04-09 00:28:20.101125 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:28:20.101131 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:28:20.101137 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:28:20.101143 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:28:20.101149 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:28:20.101155 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:28:20.101174 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-09 00:28:20.101181 | orchestrator |
2026-04-09 00:28:20.101187 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-09 00:28:20.101193 | orchestrator | Thursday 09 April 2026 00:28:19 +0000 (0:00:00.569) 0:03:23.936 ********
2026-04-09 00:28:20.101199 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:20.101206 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:28:20.101212 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:20.101218 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:28:20.101224 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:20.101231 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:28:20.101237 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:20.101243 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:28:20.101249 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:20.101258 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:20.101270 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:34.055894 | orchestrator |
2026-04-09 00:28:34.056051 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-09 00:28:34.056069 | orchestrator | Thursday 09 April 2026 00:28:20 +0000 (0:00:00.653) 0:03:24.589 ********
2026-04-09 00:28:34.056081 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:34.056093 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:28:34.056104 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:34.056137 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:28:34.056149 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:34.056158 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:28:34.056169 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:34.056180 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:28:34.056190 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:34.056201 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:34.056211 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-09 00:28:34.056221 | orchestrator |
2026-04-09 00:28:34.056231 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-09 00:28:34.056238 | orchestrator | Thursday 09 April 2026 00:28:21 +0000 (0:00:01.544) 0:03:26.134 ********
2026-04-09 00:28:34.056244 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:28:34.056249 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:28:34.056255 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:28:34.056261 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:28:34.056267 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:28:34.056273 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:28:34.056279 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:28:34.056284 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:28:34.056290 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:28:34.056296 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:28:34.056301 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-09 00:28:34.056307 | orchestrator |
2026-04-09 00:28:34.056313 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-09 00:28:34.056318 | orchestrator | Thursday 09 April 2026 00:28:22 +0000 (0:00:00.342) 0:03:26.847 ********
2026-04-09 00:28:34.056324 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:28:34.056330 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:28:34.056336 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:28:34.056341 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:28:34.056347 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:28:34.056353 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:28:34.056358 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:28:34.056364 | orchestrator |
2026-04-09 00:28:34.056370 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-09 00:28:34.056375 | orchestrator | Thursday 09 April 2026 00:28:22 +0000 (0:00:00.342) 0:03:27.190 ********
2026-04-09 00:28:34.056381 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:28:34.056388 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:28:34.056406 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:28:34.056412 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:28:34.056425 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:28:34.056431 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:28:34.056438 | orchestrator | ok: [testbed-manager]
2026-04-09 00:28:34.056445 | orchestrator |
2026-04-09 00:28:34.056452 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-09 00:28:34.056458 | orchestrator | Thursday 09 April 2026 00:28:28 +0000 (0:00:05.594) 0:03:32.784 ********
2026-04-09 00:28:34.056465 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-09 00:28:34.056480 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-09 00:28:34.056487 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:28:34.056494 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-09 00:28:34.056501 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:28:34.056507 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:28:34.056514 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-09 00:28:34.056521 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-09 00:28:34.056528 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:28:34.056535 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:28:34.056541 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-09 00:28:34.056548 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:28:34.056555 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-09 00:28:34.056562 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:28:34.056569 | orchestrator |
2026-04-09 00:28:34.056576 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-09 00:28:34.056595 | orchestrator | Thursday 09 April 2026 00:28:28 +0000 (0:00:00.331) 0:03:33.116 ********
2026-04-09 00:28:34.056603 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-09 00:28:34.056610 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-09 00:28:34.056617 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-09 00:28:34.056639 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-09 00:28:34.056646 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-09 00:28:34.056653 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-09 00:28:34.056659 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-09 00:28:34.056666 | orchestrator |
2026-04-09 00:28:34.056673 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-09 00:28:34.056680 | orchestrator | Thursday 09 April 2026 00:28:29 +0000 (0:00:01.149) 0:03:34.265 ********
2026-04-09 00:28:34.056689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:28:34.056697 | orchestrator |
2026-04-09 00:28:34.056703 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-09 00:28:34.056709 | orchestrator | Thursday 09 April 2026 00:28:30 +0000 (0:00:00.401) 0:03:34.667 ********
2026-04-09 00:28:34.056715 | orchestrator | ok: [testbed-manager]
2026-04-09 00:28:34.056720 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:28:34.056726 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:28:34.056732 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:28:34.056738 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:28:34.056743 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:28:34.056749 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:28:34.056755 | orchestrator |
2026-04-09 00:28:34.056761 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-09 00:28:34.056766 | orchestrator | Thursday 09 April 2026 00:28:31 +0000 (0:00:01.376) 0:03:36.044 ********
2026-04-09 00:28:34.056772 | orchestrator | ok: [testbed-manager]
2026-04-09 00:28:34.056778 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:28:34.056784 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:28:34.056789 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:28:34.056795 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:28:34.056800 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:28:34.056806 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:28:34.056812 | orchestrator |
2026-04-09 00:28:34.056818 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-09 00:28:34.056823 | orchestrator | Thursday 09 April 2026 00:28:32 +0000 (0:00:00.665) 0:03:36.709 ********
2026-04-09 00:28:34.056829 | orchestrator | changed: [testbed-manager]
2026-04-09 00:28:34.056835 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:28:34.056841 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:28:34.056851 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:28:34.056857 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:28:34.056863 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:28:34.056869 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:28:34.056874 | orchestrator |
2026-04-09 00:28:34.056880 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-09 00:28:34.056886 | orchestrator | Thursday 09 April 2026 00:28:32 +0000 (0:00:00.629) 0:03:37.338 ********
2026-04-09 00:28:34.056892 | orchestrator | ok: [testbed-manager]
2026-04-09 00:28:34.056897 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:28:34.056903 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:28:34.056909 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:28:34.056915 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:28:34.056920 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:28:34.056926 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:28:34.056952 | orchestrator |
2026-04-09 00:28:34.056960 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-09 00:28:34.056966 | orchestrator | Thursday 09 April 2026 00:28:33 +0000 (0:00:00.617) 0:03:37.956 ********
2026-04-09 00:28:34.056976 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693128.7527444, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:34.056985 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693115.6603928, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:34.056995 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693150.5619411, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:34.057016 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693156.8525238, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:39.595777 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693175.5575044, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:39.595922 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693158.8116462, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:39.596005 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775693159.585493, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:39.596020 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:39.596035 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:39.596050 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:39.596066 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:39.596104 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:39.596132 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:39.596142 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-09 00:28:39.596152 | orchestrator |
2026-04-09 00:28:39.596164 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-09 00:28:39.596174 | orchestrator | Thursday 09 April 2026 00:28:34 +0000 (0:00:01.003) 0:03:38.960 ********
2026-04-09 00:28:39.596183 | orchestrator | changed: [testbed-manager]
2026-04-09 00:28:39.596193 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:28:39.596201 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:28:39.596210 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:28:39.596218 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:28:39.596227 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:28:39.596235 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:28:39.596243 | orchestrator |
2026-04-09 00:28:39.596252 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-09 00:28:39.596261 | orchestrator | Thursday 09 April 2026 00:28:35 +0000 (0:00:01.133) 0:03:40.093 ********
2026-04-09 00:28:39.596269 | orchestrator | changed: [testbed-manager]
2026-04-09 00:28:39.596278 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:28:39.596286 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:28:39.596295 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:28:39.596303 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:28:39.596314 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:28:39.596323 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:28:39.596333 | orchestrator |
2026-04-09 00:28:39.596343 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-09 00:28:39.596371 | orchestrator | Thursday 09 April 2026 00:28:36 +0000 (0:00:01.168) 0:03:41.262 ********
2026-04-09 00:28:39.596382 | orchestrator | changed: [testbed-manager]
2026-04-09 00:28:39.596392 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:28:39.596401 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:28:39.596411 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:28:39.596422 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:28:39.596432 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:28:39.596441 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:28:39.596451 | orchestrator |
2026-04-09 00:28:39.596461 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-09 00:28:39.596471 | orchestrator | Thursday 09 April 2026 00:28:38 +0000 (0:00:01.257) 0:03:42.520 ********
2026-04-09 00:28:39.596481 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:28:39.596491 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:28:39.596500 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:28:39.596510 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:28:39.596521 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:28:39.596531 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:28:39.596540 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:28:39.596558 | orchestrator |
2026-04-09 00:28:39.596568 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-09 00:28:39.596579 | orchestrator | Thursday 09 April 2026 00:28:38 +0000 (0:00:00.269) 0:03:42.789 ********
2026-04-09 00:28:39.596589 | orchestrator | ok: [testbed-manager]
2026-04-09 00:28:39.596600 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:28:39.596615 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:28:39.596625 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:28:39.596635 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:28:39.596645 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:28:39.596654 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:28:39.596664 | orchestrator |
2026-04-09 00:28:39.596675 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-09 00:28:39.596686 | orchestrator | Thursday 09 April 2026 00:28:39 +0000 (0:00:00.864) 0:03:43.653 ********
2026-04-09 00:28:39.596699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:28:39.596710 | orchestrator |
2026-04-09 00:28:39.596719 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-09 00:28:39.596735 | orchestrator | Thursday 09 April 2026 00:28:39 +0000 (0:00:00.403) 0:03:44.056 ********
2026-04-09 00:29:54.096134 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:54.096249 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:29:54.096267 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:29:54.096278 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:29:54.096289 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:29:54.096300 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:29:54.096311 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:29:54.096323 | orchestrator |
2026-04-09 00:29:54.096335 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-09 00:29:54.096347 | orchestrator | Thursday 09 April 2026 00:28:48 +0000 (0:00:09.126) 0:03:53.183 ********
2026-04-09 00:29:54.096358 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:54.096369 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:54.096380 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:54.096390 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:54.096401 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:54.096412 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:54.096423 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:54.096434 | orchestrator |
2026-04-09 00:29:54.096445 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-09 00:29:54.096456 | orchestrator | Thursday 09 April 2026 00:28:49 +0000 (0:00:01.074) 0:03:54.257 ********
2026-04-09 00:29:54.096467 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:54.096477 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:54.096488 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:54.096498 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:54.096509 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:54.096519 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:54.096530 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:54.096542 | orchestrator |
2026-04-09 00:29:54.096553 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-09 00:29:54.096567 | orchestrator | Thursday 09 April 2026 00:28:50 +0000 (0:00:00.940) 0:03:55.198 ********
2026-04-09 00:29:54.096579 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:54.096593 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:54.096606 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:54.096619 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:54.096631 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:54.096645 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:54.096658 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:54.096670 | orchestrator |
2026-04-09 00:29:54.096684 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-09 00:29:54.096721 | orchestrator | Thursday 09 April 2026 00:28:51 +0000 (0:00:00.298) 0:03:55.496 ********
2026-04-09 00:29:54.096733 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:54.096746 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:54.096758 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:54.096771 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:54.096784 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:54.096797 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:54.096810 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:54.096822 | orchestrator |
2026-04-09 00:29:54.096837 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-09 00:29:54.096850 | orchestrator | Thursday 09 April 2026 00:28:51 +0000 (0:00:00.272) 0:03:55.768 ********
2026-04-09 00:29:54.096862 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:54.096876 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:54.096888 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:54.096901 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:54.097096 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:54.097111 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:54.097122 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:54.097133 | orchestrator |
2026-04-09 00:29:54.097144 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-09 00:29:54.097156 | orchestrator | Thursday 09 April 2026 00:28:51 +0000 (0:00:00.295) 0:03:56.063 ********
2026-04-09 00:29:54.097167 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:29:54.097177 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:29:54.097188 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:29:54.097198 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:29:54.097209 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:29:54.097219 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:29:54.097230 | orchestrator | ok: [testbed-manager]
2026-04-09 00:29:54.097241 | orchestrator |
2026-04-09 00:29:54.097251 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-09 00:29:54.097263 | orchestrator | Thursday 09 April 2026 00:28:57 +0000 (0:00:05.746) 0:04:01.809 ********
2026-04-09 00:29:54.097276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:29:54.097290 | orchestrator |
2026-04-09 00:29:54.097301 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-09 00:29:54.097312 | orchestrator | Thursday 09 April 2026 00:28:57 +0000 (0:00:00.461) 0:04:02.271 ********
2026-04-09 00:29:54.097323 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-09 00:29:54.097334 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-09 00:29:54.097361 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-09 00:29:54.097373 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-09 00:29:54.097384 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:29:54.097395 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-09 00:29:54.097406 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:29:54.097425 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-09 00:29:54.097443 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-09 00:29:54.097462 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-09 00:29:54.097482 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:29:54.097501 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-09 00:29:54.097520 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:29:54.097531 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-09 00:29:54.097542 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-09 00:29:54.097553 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:29:54.097586 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-09 00:29:54.097610 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:29:54.097621 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-09 00:29:54.097657 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-09 00:29:54.097675 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:29:54.097694 | orchestrator |
2026-04-09 00:29:54.097713 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-09 00:29:54.097731 | orchestrator | Thursday 09 April 2026 00:28:58 +0000 (0:00:00.332) 0:04:02.604 ********
2026-04-09 00:29:54.097751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:29:54.097770 | orchestrator |
2026-04-09 00:29:54.097788 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-09 00:29:54.097806 | orchestrator | Thursday 09 April 2026 00:28:58 +0000 (0:00:00.517) 0:04:03.121 ********
2026-04-09 00:29:54.097825 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-09 00:29:54.097844 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:29:54.097862 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-09 00:29:54.097879 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-09 00:29:54.097897 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:29:54.097949 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-09 00:29:54.097969 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:29:54.097988 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-09 00:29:54.098007 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:29:54.098122 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-09 00:29:54.098187 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:29:54.098208 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:29:54.098226 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-09 00:29:54.098246 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:29:54.098264 | orchestrator |
2026-04-09 00:29:54.098284 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-09 00:29:54.098301 | orchestrator | Thursday 09 April 2026 00:28:58 +0000 (0:00:00.280) 0:04:03.401 ********
2026-04-09 00:29:54.098319 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:29:54.098331 | orchestrator | 2026-04-09 00:29:54.098342 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-04-09 00:29:54.098353 | orchestrator | Thursday 09 April 2026 00:28:59 +0000 (0:00:00.391) 0:04:03.793 ******** 2026-04-09 00:29:54.098380 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:29:54.098400 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:29:54.098412 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:29:54.098422 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:29:54.098433 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:29:54.098444 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:29:54.098454 | orchestrator | changed: [testbed-manager] 2026-04-09 00:29:54.098465 | orchestrator | 2026-04-09 00:29:54.098476 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-04-09 00:29:54.098487 | orchestrator | Thursday 09 April 2026 00:29:30 +0000 (0:00:30.913) 0:04:34.706 ******** 2026-04-09 00:29:54.098498 | orchestrator | changed: [testbed-manager] 2026-04-09 00:29:54.098508 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:29:54.098519 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:29:54.098530 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:29:54.098540 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:29:54.098561 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:29:54.098572 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:29:54.098583 | orchestrator | 2026-04-09 00:29:54.098594 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-04-09 00:29:54.098605 | orchestrator | 
Thursday 09 April 2026 00:29:38 +0000 (0:00:08.133) 0:04:42.839 ******** 2026-04-09 00:29:54.098616 | orchestrator | changed: [testbed-manager] 2026-04-09 00:29:54.098626 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:29:54.098637 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:29:54.098647 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:29:54.098658 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:29:54.098669 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:29:54.098679 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:29:54.098690 | orchestrator | 2026-04-09 00:29:54.098701 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-04-09 00:29:54.098720 | orchestrator | Thursday 09 April 2026 00:29:46 +0000 (0:00:07.769) 0:04:50.609 ******** 2026-04-09 00:29:54.098731 | orchestrator | ok: [testbed-manager] 2026-04-09 00:29:54.098742 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:29:54.098753 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:29:54.098764 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:29:54.098775 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:29:54.098786 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:29:54.098796 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:29:54.098807 | orchestrator | 2026-04-09 00:29:54.098818 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-04-09 00:29:54.098829 | orchestrator | Thursday 09 April 2026 00:29:47 +0000 (0:00:01.758) 0:04:52.367 ******** 2026-04-09 00:29:54.098840 | orchestrator | changed: [testbed-manager] 2026-04-09 00:29:54.098851 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:29:54.098862 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:29:54.098873 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:29:54.098891 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:29:54.098938 | orchestrator | changed: 
[testbed-node-4] 2026-04-09 00:29:54.098957 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:29:54.098977 | orchestrator | 2026-04-09 00:29:54.099014 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-04-09 00:30:05.367598 | orchestrator | Thursday 09 April 2026 00:29:54 +0000 (0:00:06.186) 0:04:58.553 ******** 2026-04-09 00:30:05.367751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:30:05.367782 | orchestrator | 2026-04-09 00:30:05.367804 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-04-09 00:30:05.367822 | orchestrator | Thursday 09 April 2026 00:29:54 +0000 (0:00:00.344) 0:04:58.897 ******** 2026-04-09 00:30:05.367839 | orchestrator | changed: [testbed-manager] 2026-04-09 00:30:05.367858 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:30:05.367877 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:30:05.367956 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:30:05.367976 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:30:05.367996 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:30:05.368016 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:30:05.368035 | orchestrator | 2026-04-09 00:30:05.368055 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-04-09 00:30:05.368078 | orchestrator | Thursday 09 April 2026 00:29:55 +0000 (0:00:00.670) 0:04:59.568 ******** 2026-04-09 00:30:05.368100 | orchestrator | ok: [testbed-manager] 2026-04-09 00:30:05.368121 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:05.368141 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:05.368160 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:05.368180 | 
orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:05.368201 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:05.368254 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:05.368270 | orchestrator | 2026-04-09 00:30:05.368285 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-04-09 00:30:05.368300 | orchestrator | Thursday 09 April 2026 00:29:56 +0000 (0:00:01.876) 0:05:01.444 ******** 2026-04-09 00:30:05.368313 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:30:05.368326 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:30:05.368339 | orchestrator | changed: [testbed-manager] 2026-04-09 00:30:05.368352 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:30:05.368365 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:30:05.368377 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:30:05.368390 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:30:05.368403 | orchestrator | 2026-04-09 00:30:05.368416 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-04-09 00:30:05.368429 | orchestrator | Thursday 09 April 2026 00:29:57 +0000 (0:00:00.732) 0:05:02.177 ******** 2026-04-09 00:30:05.368442 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:05.368455 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:30:05.368468 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:30:05.368481 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:30:05.368493 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:30:05.368505 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:30:05.368518 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:30:05.368530 | orchestrator | 2026-04-09 00:30:05.368543 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-04-09 00:30:05.368556 | orchestrator | Thursday 09 April 2026 00:29:57 +0000 (0:00:00.281) 
0:05:02.459 ******** 2026-04-09 00:30:05.368568 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:05.368581 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:30:05.368595 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:30:05.368607 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:30:05.368619 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:30:05.368632 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:30:05.368645 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:30:05.368657 | orchestrator | 2026-04-09 00:30:05.368670 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-04-09 00:30:05.368683 | orchestrator | Thursday 09 April 2026 00:29:58 +0000 (0:00:00.372) 0:05:02.832 ******** 2026-04-09 00:30:05.368696 | orchestrator | ok: [testbed-manager] 2026-04-09 00:30:05.368709 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:05.368722 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:05.368735 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:05.368748 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:05.368761 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:05.368774 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:05.368787 | orchestrator | 2026-04-09 00:30:05.368800 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-04-09 00:30:05.368813 | orchestrator | Thursday 09 April 2026 00:29:58 +0000 (0:00:00.452) 0:05:03.285 ******** 2026-04-09 00:30:05.368826 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:05.368839 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:30:05.368852 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:30:05.368865 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:30:05.368878 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:30:05.368922 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
00:30:05.368943 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:30:05.368965 | orchestrator | 2026-04-09 00:30:05.368985 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-09 00:30:05.369025 | orchestrator | Thursday 09 April 2026 00:29:59 +0000 (0:00:00.268) 0:05:03.553 ******** 2026-04-09 00:30:05.369040 | orchestrator | ok: [testbed-manager] 2026-04-09 00:30:05.369053 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:05.369066 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:05.369079 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:05.369102 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:05.369115 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:05.369128 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:05.369141 | orchestrator | 2026-04-09 00:30:05.369155 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-09 00:30:05.369168 | orchestrator | Thursday 09 April 2026 00:29:59 +0000 (0:00:00.297) 0:05:03.851 ******** 2026-04-09 00:30:05.369181 | orchestrator | ok: [testbed-manager] =>  2026-04-09 00:30:05.369194 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:30:05.369207 | orchestrator | ok: [testbed-node-0] =>  2026-04-09 00:30:05.369220 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:30:05.369233 | orchestrator | ok: [testbed-node-1] =>  2026-04-09 00:30:05.369246 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:30:05.369258 | orchestrator | ok: [testbed-node-2] =>  2026-04-09 00:30:05.369270 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:30:05.369308 | orchestrator | ok: [testbed-node-3] =>  2026-04-09 00:30:05.369322 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:30:05.369335 | orchestrator | ok: [testbed-node-4] =>  2026-04-09 00:30:05.369348 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:30:05.369360 | orchestrator | ok: [testbed-node-5] =>  
2026-04-09 00:30:05.369373 | orchestrator |  docker_version: 5:27.5.1 2026-04-09 00:30:05.369385 | orchestrator | 2026-04-09 00:30:05.369399 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-09 00:30:05.369412 | orchestrator | Thursday 09 April 2026 00:29:59 +0000 (0:00:00.258) 0:05:04.109 ******** 2026-04-09 00:30:05.369425 | orchestrator | ok: [testbed-manager] =>  2026-04-09 00:30:05.369437 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:30:05.369449 | orchestrator | ok: [testbed-node-0] =>  2026-04-09 00:30:05.369462 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:30:05.369474 | orchestrator | ok: [testbed-node-1] =>  2026-04-09 00:30:05.369487 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:30:05.369500 | orchestrator | ok: [testbed-node-2] =>  2026-04-09 00:30:05.369513 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:30:05.369525 | orchestrator | ok: [testbed-node-3] =>  2026-04-09 00:30:05.369538 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:30:05.369551 | orchestrator | ok: [testbed-node-4] =>  2026-04-09 00:30:05.369563 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:30:05.369576 | orchestrator | ok: [testbed-node-5] =>  2026-04-09 00:30:05.369589 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-09 00:30:05.369601 | orchestrator | 2026-04-09 00:30:05.369614 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-09 00:30:05.369627 | orchestrator | Thursday 09 April 2026 00:29:59 +0000 (0:00:00.268) 0:05:04.378 ******** 2026-04-09 00:30:05.369641 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:05.369653 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:30:05.369666 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:30:05.369678 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:30:05.369691 | orchestrator | skipping: [testbed-node-3] 
2026-04-09 00:30:05.369704 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:30:05.369718 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:30:05.369732 | orchestrator | 2026-04-09 00:30:05.369745 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-09 00:30:05.369757 | orchestrator | Thursday 09 April 2026 00:30:00 +0000 (0:00:00.256) 0:05:04.634 ******** 2026-04-09 00:30:05.369770 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:05.369783 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:30:05.369796 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:30:05.369808 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:30:05.369821 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:30:05.369833 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:30:05.369846 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:30:05.369859 | orchestrator | 2026-04-09 00:30:05.369872 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-09 00:30:05.369907 | orchestrator | Thursday 09 April 2026 00:30:00 +0000 (0:00:00.273) 0:05:04.908 ******** 2026-04-09 00:30:05.369923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:30:05.369939 | orchestrator | 2026-04-09 00:30:05.369952 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-09 00:30:05.369966 | orchestrator | Thursday 09 April 2026 00:30:00 +0000 (0:00:00.433) 0:05:05.342 ******** 2026-04-09 00:30:05.369978 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:05.369991 | orchestrator | ok: [testbed-manager] 2026-04-09 00:30:05.370004 | orchestrator | ok: [testbed-node-1] 2026-04-09 
00:30:05.370080 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:05.370096 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:05.370109 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:05.370121 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:05.370134 | orchestrator | 2026-04-09 00:30:05.370147 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-09 00:30:05.370160 | orchestrator | Thursday 09 April 2026 00:30:01 +0000 (0:00:00.805) 0:05:06.147 ******** 2026-04-09 00:30:05.370173 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:30:05.370185 | orchestrator | ok: [testbed-manager] 2026-04-09 00:30:05.370198 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:30:05.370210 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:30:05.370223 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:30:05.370236 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:30:05.370248 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:30:05.370261 | orchestrator | 2026-04-09 00:30:05.370274 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-09 00:30:05.370289 | orchestrator | Thursday 09 April 2026 00:30:04 +0000 (0:00:03.305) 0:05:09.453 ******** 2026-04-09 00:30:05.370302 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-09 00:30:05.370315 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-09 00:30:05.370328 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-09 00:30:05.370341 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-09 00:30:05.370354 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-09 00:30:05.370366 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-09 00:30:05.370379 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:30:05.370392 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-04-09 00:30:05.370405 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-09 00:30:05.370417 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-09 00:30:05.370430 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:30:05.370442 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-09 00:30:05.370455 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-09 00:30:05.370468 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-09 00:30:05.370481 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:30:05.370495 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-09 00:30:05.370518 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-09 00:31:08.117463 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-09 00:31:08.117580 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:31:08.117598 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-09 00:31:08.117611 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-09 00:31:08.117674 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-09 00:31:08.117686 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:31:08.117697 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:31:08.117730 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-09 00:31:08.117741 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-09 00:31:08.117752 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-09 00:31:08.117763 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:31:08.117774 | orchestrator | 2026-04-09 00:31:08.117788 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-09 00:31:08.117801 | orchestrator | Thursday 
09 April 2026 00:30:05 +0000 (0:00:00.594) 0:05:10.047 ******** 2026-04-09 00:31:08.117812 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:08.117823 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:08.117924 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:08.117935 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:08.117946 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:08.117956 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:31:08.117967 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:08.117978 | orchestrator | 2026-04-09 00:31:08.117990 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-09 00:31:08.118004 | orchestrator | Thursday 09 April 2026 00:30:12 +0000 (0:00:07.048) 0:05:17.095 ******** 2026-04-09 00:31:08.118086 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:08.118110 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:08.118128 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:08.118145 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:08.118162 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:08.118178 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:08.118200 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:31:08.118239 | orchestrator | 2026-04-09 00:31:08.118277 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-09 00:31:08.118309 | orchestrator | Thursday 09 April 2026 00:30:13 +0000 (0:00:01.075) 0:05:18.170 ******** 2026-04-09 00:31:08.118338 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:08.118369 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:08.118403 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:08.118437 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:08.118471 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:31:08.118507 | orchestrator | 
changed: [testbed-node-2] 2026-04-09 00:31:08.118541 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:08.118576 | orchestrator | 2026-04-09 00:31:08.118611 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-09 00:31:08.118649 | orchestrator | Thursday 09 April 2026 00:30:21 +0000 (0:00:08.248) 0:05:26.419 ******** 2026-04-09 00:31:08.118684 | orchestrator | changed: [testbed-manager] 2026-04-09 00:31:08.118720 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:08.118755 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:08.118791 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:08.118857 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:31:08.118879 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:08.118908 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:08.118937 | orchestrator | 2026-04-09 00:31:08.118963 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-09 00:31:08.118988 | orchestrator | Thursday 09 April 2026 00:30:25 +0000 (0:00:03.656) 0:05:30.076 ******** 2026-04-09 00:31:08.119013 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:08.119039 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:08.119063 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:08.119088 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:08.119113 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:08.119139 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:08.119163 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:31:08.119187 | orchestrator | 2026-04-09 00:31:08.119203 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-09 00:31:08.119220 | orchestrator | Thursday 09 April 2026 00:30:26 +0000 (0:00:01.361) 0:05:31.437 ******** 2026-04-09 00:31:08.119260 | orchestrator | ok: [testbed-manager] 
2026-04-09 00:31:08.119277 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:08.119293 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:08.119310 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:08.119327 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:08.119345 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:31:08.119365 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:31:08.119382 | orchestrator | 2026-04-09 00:31:08.119400 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-09 00:31:08.119413 | orchestrator | Thursday 09 April 2026 00:30:28 +0000 (0:00:01.381) 0:05:32.818 ******** 2026-04-09 00:31:08.119423 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:31:08.119445 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:31:08.119456 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:31:08.119466 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:31:08.119477 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:31:08.119487 | orchestrator | changed: [testbed-manager] 2026-04-09 00:31:08.119498 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:31:08.119508 | orchestrator | 2026-04-09 00:31:08.119519 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-09 00:31:08.119530 | orchestrator | Thursday 09 April 2026 00:30:28 +0000 (0:00:00.600) 0:05:33.419 ******** 2026-04-09 00:31:08.119541 | orchestrator | ok: [testbed-manager] 2026-04-09 00:31:08.119551 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:31:08.119561 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:31:08.119572 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:31:08.119582 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:31:08.119593 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:31:08.119603 | orchestrator | changed: [testbed-node-4] 2026-04-09 
00:31:08.119614 | orchestrator |
2026-04-09 00:31:08.119624 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-04-09 00:31:08.119664 | orchestrator | Thursday 09 April 2026 00:30:39 +0000 (0:00:10.109) 0:05:43.528 ********
2026-04-09 00:31:08.119676 | orchestrator | changed: [testbed-manager]
2026-04-09 00:31:08.119687 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:08.119698 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:08.119708 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:08.119719 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:08.119729 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:08.119740 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:08.119750 | orchestrator |
2026-04-09 00:31:08.119761 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-04-09 00:31:08.119771 | orchestrator | Thursday 09 April 2026 00:30:40 +0000 (0:00:01.093) 0:05:44.622 ********
2026-04-09 00:31:08.119782 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:08.119792 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:08.119803 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:08.119819 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:08.119875 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:08.119900 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:08.119917 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:08.119935 | orchestrator |
2026-04-09 00:31:08.119952 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-04-09 00:31:08.119970 | orchestrator | Thursday 09 April 2026 00:30:50 +0000 (0:00:09.883) 0:05:54.505 ********
2026-04-09 00:31:08.119988 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:08.120006 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:08.120025 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:08.120043 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:08.120061 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:08.120075 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:08.120085 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:08.120128 | orchestrator |
2026-04-09 00:31:08.120140 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-09 00:31:08.120161 | orchestrator | Thursday 09 April 2026 00:31:01 +0000 (0:00:11.318) 0:06:05.824 ********
2026-04-09 00:31:08.120172 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-09 00:31:08.120184 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-09 00:31:08.120195 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-09 00:31:08.120205 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-09 00:31:08.120216 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-09 00:31:08.120226 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-09 00:31:08.120237 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-09 00:31:08.120251 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-09 00:31:08.120268 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-09 00:31:08.120288 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-09 00:31:08.120306 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-09 00:31:08.120324 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-09 00:31:08.120344 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-09 00:31:08.120362 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-09 00:31:08.120382 | orchestrator |
2026-04-09 00:31:08.120394 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-09 00:31:08.120404 | orchestrator | Thursday 09 April 2026 00:31:02 +0000 (0:00:01.298) 0:06:07.122 ********
2026-04-09 00:31:08.120415 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:08.120426 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:08.120436 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:31:08.120447 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:31:08.120457 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:31:08.120468 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:31:08.120479 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:31:08.120489 | orchestrator |
2026-04-09 00:31:08.120500 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-09 00:31:08.120511 | orchestrator | Thursday 09 April 2026 00:31:03 +0000 (0:00:00.661) 0:06:07.783 ********
2026-04-09 00:31:08.120521 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:08.120532 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:08.120543 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:08.120553 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:08.120563 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:08.120574 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:08.120584 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:08.120595 | orchestrator |
2026-04-09 00:31:08.120606 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-09 00:31:08.120618 | orchestrator | Thursday 09 April 2026 00:31:07 +0000 (0:00:04.055) 0:06:11.838 ********
2026-04-09 00:31:08.120628 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:08.120639 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:08.120649 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:31:08.120660 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:31:08.120670 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:31:08.120682 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:31:08.120692 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:31:08.120703 | orchestrator |
2026-04-09 00:31:08.120714 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-09 00:31:08.120725 | orchestrator | Thursday 09 April 2026 00:31:07 +0000 (0:00:00.474) 0:06:12.313 ********
2026-04-09 00:31:08.120736 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-09 00:31:08.120747 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-09 00:31:08.120758 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:08.120776 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-09 00:31:08.120787 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-09 00:31:08.120797 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:08.120808 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-09 00:31:08.120818 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-09 00:31:08.120853 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:31:08.120876 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-09 00:31:27.413926 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-09 00:31:27.414059 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:31:27.414072 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-09 00:31:27.414080 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-09 00:31:27.414087 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:31:27.414094 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-09 00:31:27.414101 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-09 00:31:27.414108 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:31:27.414115 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-09 00:31:27.414122 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-09 00:31:27.414129 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:31:27.414136 | orchestrator |
2026-04-09 00:31:27.414145 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-09 00:31:27.414153 | orchestrator | Thursday 09 April 2026 00:31:08 +0000 (0:00:00.555) 0:06:12.868 ********
2026-04-09 00:31:27.414160 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:27.414167 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:27.414174 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:31:27.414180 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:31:27.414187 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:31:27.414193 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:31:27.414200 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:31:27.414207 | orchestrator |
2026-04-09 00:31:27.414214 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-09 00:31:27.414221 | orchestrator | Thursday 09 April 2026 00:31:08 +0000 (0:00:00.476) 0:06:13.345 ********
2026-04-09 00:31:27.414228 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:27.414234 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:27.414241 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:31:27.414247 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:31:27.414254 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:31:27.414261 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:31:27.414267 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:31:27.414274 | orchestrator |
2026-04-09 00:31:27.414281 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-09 00:31:27.414288 | orchestrator | Thursday 09 April 2026 00:31:09 +0000 (0:00:00.673) 0:06:14.018 ********
2026-04-09 00:31:27.414295 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:27.414301 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:27.414308 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:31:27.414314 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:31:27.414321 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:31:27.414328 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:31:27.414334 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:31:27.414341 | orchestrator |
2026-04-09 00:31:27.414348 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-09 00:31:27.414354 | orchestrator | Thursday 09 April 2026 00:31:10 +0000 (0:00:00.550) 0:06:14.568 ********
2026-04-09 00:31:27.414361 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:27.414368 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:27.414399 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:27.414406 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:27.414412 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:27.414419 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:27.414429 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:27.414439 | orchestrator |
2026-04-09 00:31:27.414450 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-09 00:31:27.414459 | orchestrator | Thursday 09 April 2026 00:31:11 +0000 (0:00:01.890) 0:06:16.459 ********
2026-04-09 00:31:27.414472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:31:27.414485 | orchestrator |
2026-04-09 00:31:27.414495 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-09 00:31:27.414501 | orchestrator | Thursday 09 April 2026 00:31:12 +0000 (0:00:00.839) 0:06:17.298 ********
2026-04-09 00:31:27.414508 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:27.414515 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:27.414521 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:27.414528 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:27.414534 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:27.414541 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:27.414548 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:27.414554 | orchestrator |
2026-04-09 00:31:27.414561 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-09 00:31:27.414579 | orchestrator | Thursday 09 April 2026 00:31:13 +0000 (0:00:01.098) 0:06:18.397 ********
2026-04-09 00:31:27.414586 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:27.414593 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:27.414599 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:27.414606 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:27.414612 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:27.414619 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:27.414625 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:27.414632 | orchestrator |
2026-04-09 00:31:27.414639 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-09 00:31:27.414645 | orchestrator | Thursday 09 April 2026 00:31:14 +0000 (0:00:00.855) 0:06:19.252 ********
2026-04-09 00:31:27.414652 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:27.414658 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:27.414665 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:27.414671 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:27.414678 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:27.414685 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:27.414692 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:27.414698 | orchestrator |
2026-04-09 00:31:27.414706 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-09 00:31:27.414736 | orchestrator | Thursday 09 April 2026 00:31:16 +0000 (0:00:01.333) 0:06:20.586 ********
2026-04-09 00:31:27.414754 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:27.414764 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:27.414775 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:27.414785 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:27.414797 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:27.414826 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:27.414838 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:27.414861 | orchestrator |
2026-04-09 00:31:27.414871 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-09 00:31:27.414881 | orchestrator | Thursday 09 April 2026 00:31:17 +0000 (0:00:01.406) 0:06:21.993 ********
2026-04-09 00:31:27.414902 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:27.414912 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:27.414923 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:27.414942 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:27.414949 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:27.414955 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:27.414962 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:27.414968 | orchestrator |
2026-04-09 00:31:27.414975 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-09 00:31:27.414982 | orchestrator | Thursday 09 April 2026 00:31:18 +0000 (0:00:01.405) 0:06:23.399 ********
2026-04-09 00:31:27.414988 | orchestrator | changed: [testbed-manager]
2026-04-09 00:31:27.414995 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:27.415001 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:27.415008 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:27.415015 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:27.415022 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:27.415029 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:27.415036 | orchestrator |
2026-04-09 00:31:27.415043 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-09 00:31:27.415050 | orchestrator | Thursday 09 April 2026 00:31:20 +0000 (0:00:01.582) 0:06:24.981 ********
2026-04-09 00:31:27.415058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:31:27.415065 | orchestrator |
2026-04-09 00:31:27.415072 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-09 00:31:27.415079 | orchestrator | Thursday 09 April 2026 00:31:21 +0000 (0:00:00.852) 0:06:25.834 ********
2026-04-09 00:31:27.415086 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:27.415094 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:27.415101 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:27.415108 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:27.415115 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:27.415122 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:27.415129 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:27.415136 | orchestrator |
2026-04-09 00:31:27.415143 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-09 00:31:27.415150 | orchestrator | Thursday 09 April 2026 00:31:22 +0000 (0:00:01.348) 0:06:27.183 ********
2026-04-09 00:31:27.415157 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:27.415164 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:27.415171 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:27.415178 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:27.415186 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:27.415193 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:27.415200 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:27.415207 | orchestrator |
2026-04-09 00:31:27.415214 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-09 00:31:27.415221 | orchestrator | Thursday 09 April 2026 00:31:23 +0000 (0:00:01.267) 0:06:28.450 ********
2026-04-09 00:31:27.415228 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:27.415235 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:27.415242 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:27.415249 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:27.415256 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:27.415264 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:27.415271 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:27.415278 | orchestrator |
2026-04-09 00:31:27.415285 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-09 00:31:27.415292 | orchestrator | Thursday 09 April 2026 00:31:25 +0000 (0:00:01.150) 0:06:29.601 ********
2026-04-09 00:31:27.415299 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:27.415306 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:27.415313 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:27.415320 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:27.415327 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:27.415339 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:27.415346 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:27.415353 | orchestrator |
2026-04-09 00:31:27.415360 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-09 00:31:27.415367 | orchestrator | Thursday 09 April 2026 00:31:26 +0000 (0:00:01.141) 0:06:30.743 ********
2026-04-09 00:31:27.415380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:31:27.415387 | orchestrator |
2026-04-09 00:31:27.415394 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:31:27.415401 | orchestrator | Thursday 09 April 2026 00:31:27 +0000 (0:00:00.837) 0:06:31.580 ********
2026-04-09 00:31:27.415408 | orchestrator |
2026-04-09 00:31:27.415416 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:31:27.415423 | orchestrator | Thursday 09 April 2026 00:31:27 +0000 (0:00:00.064) 0:06:31.645 ********
2026-04-09 00:31:27.415430 | orchestrator |
2026-04-09 00:31:27.415437 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:31:27.415444 | orchestrator | Thursday 09 April 2026 00:31:27 +0000 (0:00:00.182) 0:06:31.827 ********
2026-04-09 00:31:27.415451 | orchestrator |
2026-04-09 00:31:27.415458 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:31:27.415472 | orchestrator | Thursday 09 April 2026 00:31:27 +0000 (0:00:00.040) 0:06:31.868 ********
2026-04-09 00:31:53.632431 | orchestrator |
2026-04-09 00:31:53.632567 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:31:53.632593 | orchestrator | Thursday 09 April 2026 00:31:27 +0000 (0:00:00.056) 0:06:31.925 ********
2026-04-09 00:31:53.632611 | orchestrator |
2026-04-09 00:31:53.632628 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:31:53.632644 | orchestrator | Thursday 09 April 2026 00:31:27 +0000 (0:00:00.044) 0:06:31.970 ********
2026-04-09 00:31:53.632661 | orchestrator |
2026-04-09 00:31:53.632678 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-09 00:31:53.632695 | orchestrator | Thursday 09 April 2026 00:31:27 +0000 (0:00:00.039) 0:06:32.009 ********
2026-04-09 00:31:53.632712 | orchestrator |
2026-04-09 00:31:53.632731 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-09 00:31:53.632748 | orchestrator | Thursday 09 April 2026 00:31:27 +0000 (0:00:00.040) 0:06:32.050 ********
2026-04-09 00:31:53.632765 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:53.632841 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:53.632859 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:53.632876 | orchestrator |
2026-04-09 00:31:53.632893 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-09 00:31:53.632910 | orchestrator | Thursday 09 April 2026 00:31:28 +0000 (0:00:01.181) 0:06:33.231 ********
2026-04-09 00:31:53.632927 | orchestrator | changed: [testbed-manager]
2026-04-09 00:31:53.632945 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:53.632963 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:53.632981 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:53.632999 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:53.633016 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:53.633035 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:53.633053 | orchestrator |
2026-04-09 00:31:53.633071 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-09 00:31:53.633090 | orchestrator | Thursday 09 April 2026 00:31:30 +0000 (0:00:01.387) 0:06:34.619 ********
2026-04-09 00:31:53.633108 | orchestrator | changed: [testbed-manager]
2026-04-09 00:31:53.633125 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:53.633143 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:53.633161 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:53.633179 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:53.633229 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:53.633248 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:53.633265 | orchestrator |
2026-04-09 00:31:53.633281 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-09 00:31:53.633297 | orchestrator | Thursday 09 April 2026 00:31:31 +0000 (0:00:01.333) 0:06:35.953 ********
2026-04-09 00:31:53.633314 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:53.633331 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:53.633348 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:53.633364 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:53.633382 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:53.633400 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:53.633418 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:53.633437 | orchestrator |
2026-04-09 00:31:53.633453 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-09 00:31:53.633470 | orchestrator | Thursday 09 April 2026 00:31:33 +0000 (0:00:02.319) 0:06:38.272 ********
2026-04-09 00:31:53.633487 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:53.633506 | orchestrator |
2026-04-09 00:31:53.633525 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-09 00:31:53.633541 | orchestrator | Thursday 09 April 2026 00:31:33 +0000 (0:00:00.107) 0:06:38.380 ********
2026-04-09 00:31:53.633558 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:53.633574 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:53.633593 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:31:53.633610 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:53.633626 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:53.633643 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:53.633659 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:53.633676 | orchestrator |
2026-04-09 00:31:53.633693 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-09 00:31:53.633712 | orchestrator | Thursday 09 April 2026 00:31:35 +0000 (0:00:01.210) 0:06:39.590 ********
2026-04-09 00:31:53.633730 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:53.633747 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:53.633764 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:31:53.633804 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:31:53.633821 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:31:53.633841 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:31:53.633860 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:31:53.633878 | orchestrator |
2026-04-09 00:31:53.633897 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-09 00:31:53.633914 | orchestrator | Thursday 09 April 2026 00:31:35 +0000 (0:00:00.511) 0:06:40.102 ********
2026-04-09 00:31:53.633932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:31:53.633951 | orchestrator |
2026-04-09 00:31:53.633968 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-09 00:31:53.633986 | orchestrator | Thursday 09 April 2026 00:31:36 +0000 (0:00:00.883) 0:06:40.985 ********
2026-04-09 00:31:53.634002 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:53.634137 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:53.634159 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:53.634176 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:53.634194 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:53.634212 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:53.634230 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:53.634248 | orchestrator |
2026-04-09 00:31:53.634266 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-09 00:31:53.634284 | orchestrator | Thursday 09 April 2026 00:31:37 +0000 (0:00:01.023) 0:06:42.008 ********
2026-04-09 00:31:53.634303 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-09 00:31:53.634368 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-09 00:31:53.634388 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-09 00:31:53.634406 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-09 00:31:53.634424 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-09 00:31:53.634442 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-09 00:31:53.634461 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-09 00:31:53.634479 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-09 00:31:53.634527 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-09 00:31:53.634546 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-09 00:31:53.634565 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-09 00:31:53.634583 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-09 00:31:53.634601 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-09 00:31:53.634619 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-09 00:31:53.634637 | orchestrator |
2026-04-09 00:31:53.634655 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-09 00:31:53.634674 | orchestrator | Thursday 09 April 2026 00:31:40 +0000 (0:00:02.482) 0:06:44.490 ********
2026-04-09 00:31:53.634691 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:53.634710 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:53.634728 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:31:53.634745 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:31:53.634763 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:31:53.634822 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:31:53.634841 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:31:53.634858 | orchestrator |
2026-04-09 00:31:53.634875 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-09 00:31:53.634893 | orchestrator | Thursday 09 April 2026 00:31:40 +0000 (0:00:00.491) 0:06:44.982 ********
2026-04-09 00:31:53.634912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:31:53.634931 | orchestrator |
2026-04-09 00:31:53.634948 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-09 00:31:53.634967 | orchestrator | Thursday 09 April 2026 00:31:41 +0000 (0:00:00.946) 0:06:45.928 ********
2026-04-09 00:31:53.634984 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:53.635002 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:53.635019 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:53.635037 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:53.635054 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:53.635071 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:53.635089 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:53.635107 | orchestrator |
2026-04-09 00:31:53.635125 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-09 00:31:53.635144 | orchestrator | Thursday 09 April 2026 00:31:42 +0000 (0:00:00.845) 0:06:46.773 ********
2026-04-09 00:31:53.635163 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:53.635181 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:53.635198 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:53.635214 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:53.635233 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:53.635251 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:53.635371 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:53.635393 | orchestrator |
2026-04-09 00:31:53.635412 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-09 00:31:53.635463 | orchestrator | Thursday 09 April 2026 00:31:43 +0000 (0:00:00.773) 0:06:47.547 ********
2026-04-09 00:31:53.635499 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:53.635519 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:53.635538 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:31:53.635556 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:31:53.635574 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:31:53.635592 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:31:53.635610 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:31:53.635629 | orchestrator |
2026-04-09 00:31:53.635647 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-09 00:31:53.635666 | orchestrator | Thursday 09 April 2026 00:31:43 +0000 (0:00:00.476) 0:06:48.023 ********
2026-04-09 00:31:53.635684 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:53.635702 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:31:53.635719 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:31:53.635736 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:31:53.635754 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:31:53.635773 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:31:53.635864 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:31:53.635883 | orchestrator |
2026-04-09 00:31:53.635915 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-09 00:31:53.635934 | orchestrator | Thursday 09 April 2026 00:31:45 +0000 (0:00:01.456) 0:06:49.480 ********
2026-04-09 00:31:53.635953 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:31:53.635972 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:31:53.635991 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:31:53.636009 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:31:53.636027 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:31:53.636046 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:31:53.636064 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:31:53.636083 | orchestrator |
2026-04-09 00:31:53.636102 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-09 00:31:53.636121 | orchestrator | Thursday 09 April 2026 00:31:45 +0000 (0:00:00.644) 0:06:50.125 ********
2026-04-09 00:31:53.636139 | orchestrator | ok: [testbed-manager]
2026-04-09 00:31:53.636158 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:31:53.636177 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:31:53.636195 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:31:53.636214 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:31:53.636231 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:31:53.636266 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:27.088349 | orchestrator |
2026-04-09 00:32:27.088447 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-09 00:32:27.088460 | orchestrator | Thursday 09 April 2026 00:31:53 +0000 (0:00:08.043) 0:06:58.168 ********
2026-04-09 00:32:27.088468 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:27.088478 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:27.088487 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:27.088495 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:27.088503 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:27.088511 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:27.088519 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:27.088527 | orchestrator |
2026-04-09 00:32:27.088535 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-09 00:32:27.088543 | orchestrator | Thursday 09 April 2026 00:31:55 +0000 (0:00:01.341) 0:06:59.510 ********
2026-04-09 00:32:27.088551 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:27.088559 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:27.088567 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:27.088574 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:27.088582 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:27.088590 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:27.088598 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:27.088606 | orchestrator |
2026-04-09 00:32:27.088614 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-09 00:32:27.088642 | orchestrator | Thursday 09 April 2026 00:31:56 +0000 (0:00:01.828) 0:07:01.339 ********
2026-04-09 00:32:27.088651 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:27.088659 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:27.088667 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:27.088674 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:27.088682 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:27.088690 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:27.088772 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:27.088780 | orchestrator |
2026-04-09 00:32:27.088788 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-09 00:32:27.088796 | orchestrator | Thursday 09 April 2026 00:31:58 +0000 (0:00:01.782) 0:07:03.121 ********
2026-04-09 00:32:27.088804 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:27.088812 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:27.088820 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:27.088827 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:27.088835 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:27.088843 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:27.088851 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:27.088859 | orchestrator |
2026-04-09 00:32:27.088867 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-09 00:32:27.088875 | orchestrator | Thursday 09 April 2026 00:31:59 +0000 (0:00:00.901) 0:07:04.023 ********
2026-04-09 00:32:27.088885 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:32:27.088894 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:32:27.088903 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:32:27.088913 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:32:27.088922 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:32:27.088933 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:32:27.088942 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:32:27.088952 | orchestrator |
2026-04-09 00:32:27.088962 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-09 00:32:27.088971 | orchestrator | Thursday 09 April 2026 00:32:00 +0000 (0:00:00.728) 0:07:04.752 ********
2026-04-09 00:32:27.088980 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:32:27.088990 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:32:27.088999 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:32:27.089008 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:32:27.089017 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:32:27.089027 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:32:27.089036 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:32:27.089046 | orchestrator |
2026-04-09 00:32:27.089055 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-09 00:32:27.089065 | orchestrator | Thursday 09 April 2026 00:32:00 +0000 (0:00:00.665) 0:07:05.418 ********
2026-04-09 00:32:27.089075 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:27.089085 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:27.089094 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:27.089103 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:27.089113 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:27.089123 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:27.089131 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:27.089139 | orchestrator |
2026-04-09 00:32:27.089147 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-09 00:32:27.089155 | orchestrator | Thursday 09 April 2026 00:32:01 +0000 (0:00:00.491) 0:07:05.910 ********
2026-04-09 00:32:27.089163 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:27.089171 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:27.089179 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:27.089186 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:27.089194 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:27.089202 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:27.089222 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:27.089239 | orchestrator |
2026-04-09 00:32:27.089247 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-09 00:32:27.089255 | orchestrator | Thursday 09 April 2026 00:32:01 +0000 (0:00:00.515) 0:07:06.425 ********
2026-04-09 00:32:27.089263 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:27.089271 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:27.089278 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:27.089286 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:27.089294 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:27.089302 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:27.089309 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:27.089317 | orchestrator |
2026-04-09 00:32:27.089325 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-09 00:32:27.089333 | orchestrator | Thursday 09 April 2026 00:32:02 +0000 (0:00:00.514) 0:07:06.940 ********
2026-04-09 00:32:27.089341 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:27.089349 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:27.089357 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:27.089364 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:27.089372 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:27.089380 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:27.089387 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:27.089395 | orchestrator |
2026-04-09 00:32:27.089417 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-09 00:32:27.089426 | orchestrator | Thursday 09 April 2026 00:32:08 +0000 (0:00:05.587) 0:07:12.527 ********
2026-04-09 00:32:27.089434 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:32:27.089442 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:32:27.089449 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:32:27.089457 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:32:27.089465 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:32:27.089473 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:32:27.089480 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:32:27.089488 | orchestrator |
2026-04-09 00:32:27.089496 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-09 00:32:27.089504 | orchestrator | Thursday 09 April 2026 00:32:08 +0000 (0:00:00.685) 0:07:13.212 ********
2026-04-09 00:32:27.089514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:32:27.089523 | orchestrator |
2026-04-09 00:32:27.089532 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-09 00:32:27.089540 | orchestrator | Thursday 09 April 2026 00:32:09 +0000 (0:00:00.791) 0:07:14.004 ********
2026-04-09 00:32:27.089547 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:27.089555 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:27.089563 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:27.089571 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:27.089579 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:27.089587 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:27.089594 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:27.089602 | orchestrator |
2026-04-09 00:32:27.089610 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-09 00:32:27.089618 | orchestrator | Thursday 09 April 2026 00:32:11 +0000 (0:00:02.241) 0:07:16.245 ********
2026-04-09 00:32:27.089626 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:27.089633 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:27.089641 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:27.089649 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:27.089656 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:27.089664 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:27.089671 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:27.089679 | orchestrator |
2026-04-09 00:32:27.089687 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-09 00:32:27.089721 | orchestrator | Thursday 09 April 2026 00:32:13 +0000 (0:00:01.341) 0:07:17.587 ********
2026-04-09 00:32:27.089729 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:27.089737 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:27.089745 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:27.089753 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:27.089760 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:27.089768 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:27.089776 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:27.089783 | orchestrator |
2026-04-09 00:32:27.089791 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-09 00:32:27.089799 | orchestrator | Thursday 09 April 2026 00:32:13 +0000 (0:00:00.808) 0:07:18.395 ********
2026-04-09 00:32:27.089807 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:32:27.089816 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:32:27.089824 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:32:27.089832 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:32:27.089840 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:32:27.089848 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:32:27.089856 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-09 00:32:27.089863 | orchestrator |
2026-04-09 00:32:27.089872 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-09 00:32:27.089884 | orchestrator | Thursday 09 April 2026 00:32:15 +0000 (0:00:01.771) 0:07:20.166 ********
2026-04-09 00:32:27.089893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:32:27.089901 | orchestrator |
2026-04-09 00:32:27.089909 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-09 00:32:27.089917 | orchestrator | Thursday 09 April 2026 00:32:16 +0000 (0:00:00.949) 0:07:21.115 ********
2026-04-09 00:32:27.089924 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:27.089932 | orchestrator | changed: [testbed-manager]
2026-04-09 00:32:27.089940 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:27.089948 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:27.089956 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:27.089964 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:27.089972 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:27.089979 | orchestrator |
2026-04-09 00:32:27.089992 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-04-09 00:32:59.184989 | orchestrator | Thursday 09 April 2026 00:32:27 +0000 (0:00:10.428) 0:07:31.544 ********
2026-04-09 00:32:59.185127 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:59.185155 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:59.185175 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:59.185194 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:59.185213 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:59.185229 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:59.185240 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:59.185251 | orchestrator |
2026-04-09 00:32:59.185263 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-04-09 00:32:59.185305 | orchestrator | Thursday 09 April 2026 00:32:28 +0000 (0:00:01.896) 0:07:33.440 ********
2026-04-09 00:32:59.185325 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:59.185343 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:59.185360 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:59.185378 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:59.185398 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:59.185415 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:59.185435 | orchestrator |
2026-04-09 00:32:59.185453 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-09 00:32:59.185471 | orchestrator | Thursday 09 April 2026 00:32:30 +0000 (0:00:01.713) 0:07:35.154 ********
2026-04-09 00:32:59.185483 | orchestrator | changed: [testbed-manager]
2026-04-09 00:32:59.185495 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:59.185508 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:59.185520 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:59.185533 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:59.185545 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:59.185559 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:59.185571 | orchestrator |
2026-04-09 00:32:59.185585 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-09 00:32:59.185597 | orchestrator |
2026-04-09 00:32:59.185608 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-09 00:32:59.185618 | orchestrator | Thursday 09 April 2026 00:32:31 +0000 (0:00:01.271) 0:07:36.426 ********
2026-04-09 00:32:59.185629 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:32:59.185640 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:32:59.185651 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:32:59.185662 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:32:59.185704 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:32:59.185715 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:32:59.185725 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:32:59.185736 | orchestrator |
2026-04-09 00:32:59.185747 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-09 00:32:59.185758 | orchestrator |
2026-04-09 00:32:59.185769 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-09 00:32:59.185780 | orchestrator | Thursday 09 April 2026 00:32:32 +0000 (0:00:00.470) 0:07:36.897 ********
2026-04-09 00:32:59.185790 | orchestrator | changed: [testbed-manager]
2026-04-09 00:32:59.185801 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:59.185812 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:59.185822 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:59.185833 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:59.185844 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:59.185854 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:59.185865 | orchestrator |
2026-04-09 00:32:59.185876 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-09 00:32:59.185887 | orchestrator | Thursday 09 April 2026 00:32:33 +0000 (0:00:01.331) 0:07:38.228 ********
2026-04-09 00:32:59.185897 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:59.185908 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:59.185919 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:59.185930 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:59.185941 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:59.185951 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:59.185962 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:59.185973 | orchestrator |
2026-04-09 00:32:59.185984 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-09 00:32:59.186004 | orchestrator | Thursday 09 April 2026 00:32:35 +0000 (0:00:01.769) 0:07:39.998 ********
2026-04-09 00:32:59.186103 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:32:59.186124 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:32:59.186143 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:32:59.186158 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:32:59.186187 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:32:59.186205 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:32:59.186224 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:32:59.186242 | orchestrator |
2026-04-09 00:32:59.186261 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-09 00:32:59.186281 | orchestrator | Thursday 09 April 2026 00:32:36 +0000 (0:00:00.526) 0:07:40.525 ********
2026-04-09 00:32:59.186299 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:32:59.186321 | orchestrator |
2026-04-09 00:32:59.186357 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-09 00:32:59.186376 | orchestrator | Thursday 09 April 2026 00:32:36 +0000 (0:00:00.869) 0:07:41.394 ********
2026-04-09 00:32:59.186398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:32:59.186419 | orchestrator |
2026-04-09 00:32:59.186439 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-09 00:32:59.186525 | orchestrator | Thursday 09 April 2026 00:32:38 +0000 (0:00:01.088) 0:07:42.482 ********
2026-04-09 00:32:59.186551 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:59.186571 | orchestrator | changed: [testbed-manager]
2026-04-09 00:32:59.186589 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:59.186600 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:59.186611 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:59.186622 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:59.186632 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:59.186643 | orchestrator |
2026-04-09 00:32:59.186718 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-09 00:32:59.186732 | orchestrator | Thursday 09 April 2026 00:32:47 +0000 (0:00:09.552) 0:07:52.035 ********
2026-04-09 00:32:59.186743 | orchestrator | changed: [testbed-manager]
2026-04-09 00:32:59.186754 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:59.186765 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:59.186776 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:59.186786 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:59.186797 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:59.186808 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:59.186832 | orchestrator |
2026-04-09 00:32:59.186843 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-09 00:32:59.186908 | orchestrator | Thursday 09 April 2026 00:32:48 +0000 (0:00:00.852) 0:07:52.887 ********
2026-04-09 00:32:59.186922 | orchestrator | changed: [testbed-manager]
2026-04-09 00:32:59.186933 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:59.186944 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:59.186954 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:59.186965 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:59.186976 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:59.186988 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:59.187008 | orchestrator |
2026-04-09 00:32:59.187026 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-09 00:32:59.187044 | orchestrator | Thursday 09 April 2026 00:32:49 +0000 (0:00:01.391) 0:07:54.278 ********
2026-04-09 00:32:59.187063 | orchestrator | changed: [testbed-manager]
2026-04-09 00:32:59.187082 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:59.187100 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:59.187119 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:59.187138 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:59.187157 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:59.187175 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:59.187195 | orchestrator |
2026-04-09 00:32:59.187214 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-09 00:32:59.187245 | orchestrator | Thursday 09 April 2026 00:32:51 +0000 (0:00:02.013) 0:07:56.292 ********
2026-04-09 00:32:59.187257 | orchestrator | changed: [testbed-manager]
2026-04-09 00:32:59.187268 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:59.187279 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:59.187289 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:59.187300 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:59.187310 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:59.187321 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:59.187331 | orchestrator |
2026-04-09 00:32:59.187342 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-09 00:32:59.187353 | orchestrator | Thursday 09 April 2026 00:32:53 +0000 (0:00:01.268) 0:07:57.560 ********
2026-04-09 00:32:59.187363 | orchestrator | changed: [testbed-manager]
2026-04-09 00:32:59.187374 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:59.187384 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:59.187394 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:59.187405 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:59.187415 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:59.187426 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:59.187436 | orchestrator |
2026-04-09 00:32:59.187447 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-09 00:32:59.187458 | orchestrator |
2026-04-09 00:32:59.187468 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-09 00:32:59.187479 | orchestrator | Thursday 09 April 2026 00:32:54 +0000 (0:00:01.267) 0:07:58.828 ********
2026-04-09 00:32:59.187490 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:32:59.187501 | orchestrator |
2026-04-09 00:32:59.187512 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-09 00:32:59.187522 | orchestrator | Thursday 09 April 2026 00:32:55 +0000 (0:00:00.954) 0:07:59.782 ********
2026-04-09 00:32:59.187533 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:59.187544 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:59.187554 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:59.187565 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:59.187576 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:59.187587 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:59.187597 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:59.187608 | orchestrator |
2026-04-09 00:32:59.187619 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-09 00:32:59.187629 | orchestrator | Thursday 09 April 2026 00:32:56 +0000 (0:00:00.830) 0:08:00.612 ********
2026-04-09 00:32:59.187640 | orchestrator | changed: [testbed-manager]
2026-04-09 00:32:59.187651 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:32:59.187662 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:32:59.187864 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:32:59.187879 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:32:59.187890 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:32:59.187900 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:32:59.187911 | orchestrator |
2026-04-09 00:32:59.187922 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-09 00:32:59.187933 | orchestrator | Thursday 09 April 2026 00:32:57 +0000 (0:00:01.309) 0:08:01.922 ********
2026-04-09 00:32:59.187944 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:32:59.187955 | orchestrator |
2026-04-09 00:32:59.187965 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-09 00:32:59.187976 | orchestrator | Thursday 09 April 2026 00:32:58 +0000 (0:00:00.900) 0:08:02.822 ********
2026-04-09 00:32:59.187987 | orchestrator | ok: [testbed-manager]
2026-04-09 00:32:59.187998 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:32:59.188021 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:32:59.188032 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:32:59.188043 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:32:59.188053 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:32:59.188064 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:32:59.188074 | orchestrator |
2026-04-09 00:32:59.188100 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-09 00:33:01.117749 | orchestrator | Thursday 09 April 2026 00:32:59 +0000 (0:00:00.816) 0:08:03.639 ********
2026-04-09 00:33:01.117872 | orchestrator | changed: [testbed-manager]
2026-04-09 00:33:01.117899 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:01.117921 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:01.117939 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:01.117959 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:01.117981 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:01.118001 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:01.118093 | orchestrator |
2026-04-09 00:33:01.118115 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:33:01.118136 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-09 00:33:01.118156 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-09 00:33:01.118175 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-09 00:33:01.118195 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-09 00:33:01.118212 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-09 00:33:01.118243 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-09 00:33:01.118264 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-09 00:33:01.118285 | orchestrator |
2026-04-09 00:33:01.118305 | orchestrator |
2026-04-09 00:33:01.118324 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:33:01.118344 | orchestrator | Thursday 09 April 2026 00:33:00 +0000 (0:00:01.470) 0:08:05.109 ********
2026-04-09 00:33:01.118365 | orchestrator | ===============================================================================
2026-04-09 00:33:01.118385 | orchestrator | osism.commons.packages : Install required packages --------------------- 71.50s
2026-04-09 00:33:01.118406 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.58s
2026-04-09 00:33:01.118426 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 30.91s
2026-04-09 00:33:01.118445 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.68s
2026-04-09 00:33:01.118486 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.44s
2026-04-09 00:33:01.118521 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.32s
2026-04-09 00:33:01.118540 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.11s
2026-04-09 00:33:01.118559 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.43s
2026-04-09 00:33:01.118579 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.11s
2026-04-09 00:33:01.118595 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.88s
2026-04-09 00:33:01.118613 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.55s
2026-04-09 00:33:01.118693 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.13s
2026-04-09 00:33:01.118714 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.25s
2026-04-09 00:33:01.118733 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.13s
2026-04-09 00:33:01.118751 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.04s
2026-04-09 00:33:01.118769 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.77s
2026-04-09 00:33:01.118787 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.05s
2026-04-09 00:33:01.118805 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.19s
2026-04-09 00:33:01.118845 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.84s
2026-04-09 00:33:01.118865 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.75s
2026-04-09 00:33:01.370742 | orchestrator | + osism apply fail2ban
2026-04-09 00:33:13.139708 | orchestrator | 2026-04-09 00:33:13 | INFO  | Prepare task for execution of fail2ban.
2026-04-09 00:33:13.228298 | orchestrator | 2026-04-09 00:33:13 | INFO  | Task cfbf0a1b-f01a-42a8-a970-55693390412a (fail2ban) was prepared for execution.
2026-04-09 00:33:13.228419 | orchestrator | 2026-04-09 00:33:13 | INFO  | It takes a moment until task cfbf0a1b-f01a-42a8-a970-55693390412a (fail2ban) has been started and output is visible here.
2026-04-09 00:33:34.488554 | orchestrator |
2026-04-09 00:33:34.488779 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-09 00:33:34.488799 | orchestrator |
2026-04-09 00:33:34.488808 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-09 00:33:34.488819 | orchestrator | Thursday 09 April 2026 00:33:16 +0000 (0:00:00.358) 0:00:00.358 ********
2026-04-09 00:33:34.488832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:33:34.488856 | orchestrator |
2026-04-09 00:33:34.488867 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-09 00:33:34.488878 | orchestrator | Thursday 09 April 2026 00:33:18 +0000 (0:00:01.243) 0:00:01.601 ********
2026-04-09 00:33:34.488889 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:34.488899 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:34.488905 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:34.488911 | orchestrator | changed: [testbed-manager]
2026-04-09 00:33:34.488917 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:34.488923 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:34.488929 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:34.488935 | orchestrator |
2026-04-09 00:33:34.488941 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-09 00:33:34.488948 | orchestrator | Thursday 09 April 2026 00:33:29 +0000 (0:00:11.578) 0:00:13.179 ********
2026-04-09 00:33:34.488953 | orchestrator | changed: [testbed-manager]
2026-04-09 00:33:34.488960 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:34.488965 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:34.488971 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:34.488977 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:34.488983 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:34.488989 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:34.488995 | orchestrator |
2026-04-09 00:33:34.489001 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-09 00:33:34.489007 | orchestrator | Thursday 09 April 2026 00:33:31 +0000 (0:00:01.239) 0:00:14.782 ********
2026-04-09 00:33:34.489014 | orchestrator | ok: [testbed-manager]
2026-04-09 00:33:34.489024 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:33:34.489033 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:33:34.489071 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:33:34.489081 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:33:34.489089 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:33:34.489098 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:33:34.489108 | orchestrator |
2026-04-09 00:33:34.489117 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-09 00:33:34.489128 | orchestrator | Thursday 09 April 2026 00:33:32 +0000 (0:00:01.239) 0:00:16.021 ********
2026-04-09 00:33:34.489138 | orchestrator | changed: [testbed-manager]
2026-04-09 00:33:34.489148 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:33:34.489158 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:33:34.489168 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:33:34.489176 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:33:34.489182 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:33:34.489189 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:33:34.489196 | orchestrator |
2026-04-09 00:33:34.489202 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:33:34.489209 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:33:34.489218 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:33:34.489225 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:33:34.489232 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:33:34.489239 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:33:34.489245 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:33:34.489252 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:33:34.489258 | orchestrator |
2026-04-09 00:33:34.489265 | orchestrator |
2026-04-09 00:33:34.489271 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:33:34.489278 | orchestrator | Thursday 09 April 2026 00:33:34 +0000 (0:00:01.648) 0:00:17.670 ********
2026-04-09 00:33:34.489297 |
orchestrator | =============================================================================== 2026-04-09 00:33:34.489304 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.58s 2026-04-09 00:33:34.489311 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.65s 2026-04-09 00:33:34.489317 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.60s 2026-04-09 00:33:34.489324 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.24s 2026-04-09 00:33:34.489331 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.24s 2026-04-09 00:33:34.657675 | orchestrator | + osism apply network 2026-04-09 00:33:45.988389 | orchestrator | 2026-04-09 00:33:45 | INFO  | Prepare task for execution of network. 2026-04-09 00:33:46.063558 | orchestrator | 2026-04-09 00:33:46 | INFO  | Task 61d67a25-01c1-4072-99cb-f96a95930ad3 (network) was prepared for execution. 2026-04-09 00:33:46.063718 | orchestrator | 2026-04-09 00:33:46 | INFO  | It takes a moment until task 61d67a25-01c1-4072-99cb-f96a95930ad3 (network) has been started and output is visible here. 
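The fail2ban run above installs the package, copies configuration files, and reloads the service on all seven hosts. For reference, a jail definition of the kind such a role typically deploys looks like the following sketch (illustrative values only; the actual files shipped by osism.services.fail2ban may differ):

```ini
# /etc/fail2ban/jail.d/sshd.local -- hypothetical example, not the role's actual output
[sshd]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 600
```

After changing jail files, `fail2ban-client reload` applies the new configuration without dropping existing bans, which matches the role's final "Reload fail2ban configuration" task.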
2026-04-09 00:34:14.208608 | orchestrator | 2026-04-09 00:34:14.208726 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-09 00:34:14.208740 | orchestrator | 2026-04-09 00:34:14.208748 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-09 00:34:14.208775 | orchestrator | Thursday 09 April 2026 00:33:49 +0000 (0:00:00.329) 0:00:00.329 ******** 2026-04-09 00:34:14.208782 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:14.208790 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:14.208797 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:14.208804 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:14.208810 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:14.208817 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:14.208823 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:14.208830 | orchestrator | 2026-04-09 00:34:14.208837 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-09 00:34:14.208843 | orchestrator | Thursday 09 April 2026 00:33:50 +0000 (0:00:00.598) 0:00:00.928 ******** 2026-04-09 00:34:14.208852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:34:14.208861 | orchestrator | 2026-04-09 00:34:14.208868 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-09 00:34:14.208874 | orchestrator | Thursday 09 April 2026 00:33:51 +0000 (0:00:01.145) 0:00:02.073 ******** 2026-04-09 00:34:14.208881 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:14.208887 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:14.208894 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:14.208900 | 
orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:14.208908 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:14.208914 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:14.208921 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:14.208928 | orchestrator | 2026-04-09 00:34:14.208934 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-04-09 00:34:14.208941 | orchestrator | Thursday 09 April 2026 00:33:53 +0000 (0:00:02.616) 0:00:04.690 ******** 2026-04-09 00:34:14.208948 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:14.208954 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:14.208961 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:14.208967 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:14.208974 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:14.208980 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:14.208986 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:14.208993 | orchestrator | 2026-04-09 00:34:14.209000 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-04-09 00:34:14.209006 | orchestrator | Thursday 09 April 2026 00:33:55 +0000 (0:00:01.608) 0:00:06.299 ******** 2026-04-09 00:34:14.209013 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-04-09 00:34:14.209020 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-09 00:34:14.209026 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-04-09 00:34:14.209033 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-09 00:34:14.209039 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-09 00:34:14.209046 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-09 00:34:14.209052 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-09 00:34:14.209059 | orchestrator | 2026-04-09 00:34:14.209065 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2026-04-09 00:34:14.209072 | orchestrator | Thursday 09 April 2026 00:33:56 +0000 (0:00:01.155) 0:00:07.454 ******** 2026-04-09 00:34:14.209079 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 00:34:14.209086 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:34:14.209093 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 00:34:14.209100 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 00:34:14.209106 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 00:34:14.209113 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 00:34:14.209121 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 00:34:14.209129 | orchestrator | 2026-04-09 00:34:14.209142 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-09 00:34:14.209149 | orchestrator | Thursday 09 April 2026 00:34:00 +0000 (0:00:03.453) 0:00:10.907 ******** 2026-04-09 00:34:14.209157 | orchestrator | changed: [testbed-manager] 2026-04-09 00:34:14.209165 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:14.209172 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:14.209180 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:14.209188 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:34:14.209198 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:14.209210 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:14.209221 | orchestrator | 2026-04-09 00:34:14.209233 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-04-09 00:34:14.209259 | orchestrator | Thursday 09 April 2026 00:34:01 +0000 (0:00:01.695) 0:00:12.602 ******** 2026-04-09 00:34:14.209271 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 00:34:14.209282 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 00:34:14.209293 | orchestrator | ok: [testbed-node-0 
-> localhost] 2026-04-09 00:34:14.209302 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-09 00:34:14.209313 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-09 00:34:14.209324 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-09 00:34:14.209334 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-09 00:34:14.209344 | orchestrator | 2026-04-09 00:34:14.209355 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-09 00:34:14.209366 | orchestrator | Thursday 09 April 2026 00:34:03 +0000 (0:00:01.983) 0:00:14.586 ******** 2026-04-09 00:34:14.209377 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:14.209386 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:14.209396 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:14.209406 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:14.209416 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:14.209426 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:14.209435 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:14.209446 | orchestrator | 2026-04-09 00:34:14.209458 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-09 00:34:14.209489 | orchestrator | Thursday 09 April 2026 00:34:04 +0000 (0:00:00.917) 0:00:15.504 ******** 2026-04-09 00:34:14.209503 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:14.209514 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:14.209525 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:14.209534 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:14.209563 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:14.209570 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:14.209577 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:14.209583 | orchestrator | 2026-04-09 00:34:14.209590 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-04-09 00:34:14.209597 | orchestrator | Thursday 09 April 2026 00:34:05 +0000 (0:00:00.777) 0:00:16.282 ******** 2026-04-09 00:34:14.209604 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:14.209610 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:14.209617 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:14.209623 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:14.209630 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:14.209637 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:14.209643 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:14.209650 | orchestrator | 2026-04-09 00:34:14.209657 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-09 00:34:14.209664 | orchestrator | Thursday 09 April 2026 00:34:07 +0000 (0:00:02.190) 0:00:18.472 ******** 2026-04-09 00:34:14.209670 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:14.209677 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:14.209684 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:14.209691 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:14.209697 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:14.209713 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:14.209721 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-09 00:34:14.209729 | orchestrator | 2026-04-09 00:34:14.209735 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-09 00:34:14.209742 | orchestrator | Thursday 09 April 2026 00:34:08 +0000 (0:00:00.880) 0:00:19.353 ******** 2026-04-09 00:34:14.209749 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:14.209756 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:34:14.209762 | orchestrator | changed: [testbed-node-3] 2026-04-09 
00:34:14.209769 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:34:14.209775 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:34:14.209782 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:34:14.209789 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:34:14.209795 | orchestrator | 2026-04-09 00:34:14.209802 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-09 00:34:14.209809 | orchestrator | Thursday 09 April 2026 00:34:09 +0000 (0:00:01.416) 0:00:20.770 ******** 2026-04-09 00:34:14.209816 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:34:14.209824 | orchestrator | 2026-04-09 00:34:14.209831 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-09 00:34:14.209838 | orchestrator | Thursday 09 April 2026 00:34:11 +0000 (0:00:01.217) 0:00:21.987 ******** 2026-04-09 00:34:14.209845 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:14.209851 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:14.209858 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:14.209865 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:14.209871 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:14.209878 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:14.209884 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:14.209891 | orchestrator | 2026-04-09 00:34:14.209898 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-09 00:34:14.209904 | orchestrator | Thursday 09 April 2026 00:34:12 +0000 (0:00:01.250) 0:00:23.237 ******** 2026-04-09 00:34:14.209911 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:14.209918 | orchestrator | ok: [testbed-node-0] 2026-04-09 
00:34:14.209924 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:14.209931 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:14.209937 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:14.209944 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:14.209950 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:14.209961 | orchestrator | 2026-04-09 00:34:14.209972 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-09 00:34:14.209984 | orchestrator | Thursday 09 April 2026 00:34:13 +0000 (0:00:00.781) 0:00:24.019 ******** 2026-04-09 00:34:14.209995 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:34:14.210006 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:34:14.210082 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:34:14.210102 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:34:14.210110 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:34:14.210116 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:34:14.210123 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:34:14.210129 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:34:14.210136 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:34:14.210143 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-09 00:34:14.210156 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:34:14.210163 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:34:14.210169 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:34:14.210176 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-09 00:34:14.210182 | orchestrator | 2026-04-09 00:34:14.210197 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-09 00:34:29.391291 | orchestrator | Thursday 09 April 2026 00:34:14 +0000 (0:00:01.028) 0:00:25.048 ******** 2026-04-09 00:34:29.391399 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:29.391416 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:29.391427 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:29.391438 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:29.391449 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:29.391460 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:29.391471 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:29.391482 | orchestrator | 2026-04-09 00:34:29.391495 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-09 00:34:29.391506 | orchestrator | Thursday 09 April 2026 00:34:14 +0000 (0:00:00.792) 0:00:25.840 ******** 2026-04-09 00:34:29.391562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2026-04-09 00:34:29.391576 | orchestrator | 2026-04-09 00:34:29.391588 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-09 00:34:29.391599 | orchestrator | Thursday 09 April 2026 00:34:19 +0000 (0:00:04.384) 0:00:30.225 ******** 2026-04-09 00:34:29.391611 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-09 00:34:29.391626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:34:29.391638 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:34:29.391649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:34:29.391661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:34:29.391672 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-09 00:34:29.391684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:34:29.391696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 
1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:34:29.391748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-09 00:34:29.391767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-09 00:34:29.391779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-09 00:34:29.391821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-09 00:34:29.391836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-09 00:34:29.391850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 
'addresses': ['192.168.128.15/20']}}) 2026-04-09 00:34:29.391863 | orchestrator | 2026-04-09 00:34:29.391876 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-09 00:34:29.391889 | orchestrator | Thursday 09 April 2026 00:34:24 +0000 (0:00:05.105) 0:00:35.330 ******** 2026-04-09 00:34:29.391902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:34:29.391915 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-09 00:34:29.391928 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:34:29.391942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:34:29.391955 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:34:29.391968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-09 00:34:29.391981 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-09 00:34:29.392003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:34:29.392016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-09 00:34:29.392035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-09 00:34:29.392047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-09 00:34:29.392058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-09 00:34:29.392081 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-09 00:34:41.106364 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-09 00:34:41.106480 | orchestrator | 2026-04-09 00:34:41.106545 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-09 00:34:41.106559 | orchestrator | Thursday 09 April 2026 00:34:29 +0000 (0:00:05.126) 0:00:40.457 ******** 2026-04-09 00:34:41.106572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:34:41.106584 | orchestrator | 2026-04-09 00:34:41.106596 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-09 00:34:41.106607 | orchestrator | Thursday 09 April 2026 00:34:30 +0000 (0:00:01.103) 0:00:41.561 ******** 2026-04-09 00:34:41.106618 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:41.106629 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:41.106640 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:41.106651 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:41.106661 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:41.106672 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:41.106683 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:41.106693 | orchestrator | 2026-04-09 00:34:41.106704 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2026-04-09 00:34:41.106715 | orchestrator | Thursday 09 April 2026 00:34:31 +0000 (0:00:00.995) 0:00:42.557 ******** 2026-04-09 00:34:41.106726 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:34:41.106738 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:34:41.106749 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:34:41.106783 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:34:41.106794 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:41.106806 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:34:41.106817 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:34:41.106828 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:34:41.106838 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:34:41.106849 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:41.106859 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:34:41.106870 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:34:41.106881 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:34:41.106892 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:34:41.106906 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:41.106919 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:34:41.106932 | orchestrator | skipping: 
[testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:34:41.106944 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:34:41.106957 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:34:41.106969 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:34:41.106982 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:34:41.106994 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:34:41.107006 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:34:41.107019 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:41.107031 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:34:41.107042 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:34:41.107054 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:34:41.107067 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:34:41.107079 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:41.107091 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:41.107103 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-09 00:34:41.107115 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-09 00:34:41.107127 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-09 00:34:41.107139 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-09 00:34:41.107151 | 
orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:41.107164 | orchestrator | 2026-04-09 00:34:41.107176 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-04-09 00:34:41.107205 | orchestrator | Thursday 09 April 2026 00:34:32 +0000 (0:00:00.681) 0:00:43.238 ******** 2026-04-09 00:34:41.107219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:34:41.107232 | orchestrator | 2026-04-09 00:34:41.107245 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-04-09 00:34:41.107265 | orchestrator | Thursday 09 April 2026 00:34:33 +0000 (0:00:01.098) 0:00:44.337 ******** 2026-04-09 00:34:41.107276 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:41.107287 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:41.107298 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:41.107309 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:41.107337 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:41.107348 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:41.107359 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:41.107370 | orchestrator | 2026-04-09 00:34:41.107380 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-04-09 00:34:41.107391 | orchestrator | Thursday 09 April 2026 00:34:34 +0000 (0:00:00.691) 0:00:45.029 ******** 2026-04-09 00:34:41.107402 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:41.107413 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:41.107423 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:41.107434 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:41.107444 | 
orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:41.107455 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:41.107465 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:41.107476 | orchestrator | 2026-04-09 00:34:41.107507 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-09 00:34:41.107518 | orchestrator | Thursday 09 April 2026 00:34:34 +0000 (0:00:00.567) 0:00:45.597 ******** 2026-04-09 00:34:41.107529 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:41.107540 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:41.107551 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:41.107561 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:41.107579 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:41.107598 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:41.107616 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:41.107646 | orchestrator | 2026-04-09 00:34:41.107667 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-09 00:34:41.107684 | orchestrator | Thursday 09 April 2026 00:34:35 +0000 (0:00:00.644) 0:00:46.241 ******** 2026-04-09 00:34:41.107702 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:41.107719 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:41.107735 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:41.107752 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:41.107770 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:41.107787 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:41.107805 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:41.107823 | orchestrator | 2026-04-09 00:34:41.107841 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-04-09 00:34:41.107858 | orchestrator | Thursday 09 April 2026 00:34:36 +0000 (0:00:01.510) 0:00:47.752 ******** 
2026-04-09 00:34:41.107875 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:41.107893 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:41.107911 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:41.107929 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:41.107949 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:41.107968 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:41.107986 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:41.108006 | orchestrator | 2026-04-09 00:34:41.108025 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-04-09 00:34:41.108040 | orchestrator | Thursday 09 April 2026 00:34:38 +0000 (0:00:01.183) 0:00:48.935 ******** 2026-04-09 00:34:41.108050 | orchestrator | ok: [testbed-manager] 2026-04-09 00:34:41.108061 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:34:41.108076 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:34:41.108087 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:34:41.108098 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:34:41.108109 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:34:41.108120 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:34:41.108130 | orchestrator | 2026-04-09 00:34:41.108152 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-09 00:34:41.108164 | orchestrator | Thursday 09 April 2026 00:34:40 +0000 (0:00:01.929) 0:00:50.865 ******** 2026-04-09 00:34:41.108174 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:41.108185 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:41.108196 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:41.108207 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:41.108226 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:41.108237 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:41.108248 | orchestrator | skipping: [testbed-node-5] 2026-04-09 
00:34:41.108259 | orchestrator | 2026-04-09 00:34:41.108270 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-09 00:34:41.108281 | orchestrator | Thursday 09 April 2026 00:34:40 +0000 (0:00:00.538) 0:00:51.404 ******** 2026-04-09 00:34:41.108291 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:34:41.108302 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:34:41.108313 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:34:41.108324 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:34:41.108334 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:34:41.108345 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:34:41.108356 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:34:41.108366 | orchestrator | 2026-04-09 00:34:41.108377 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:34:41.108390 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-09 00:34:41.108402 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 00:34:41.108425 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 00:34:41.265985 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 00:34:41.266139 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 00:34:41.266155 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 00:34:41.266168 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-09 00:34:41.266180 | orchestrator | 2026-04-09 00:34:41.266192 | orchestrator | 2026-04-09 00:34:41.266203 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:34:41.266215 | orchestrator | Thursday 09 April 2026 00:34:41 +0000 (0:00:00.541) 0:00:51.945 ******** 2026-04-09 00:34:41.266226 | orchestrator | =============================================================================== 2026-04-09 00:34:41.266237 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.13s 2026-04-09 00:34:41.266248 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.11s 2026-04-09 00:34:41.266259 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.38s 2026-04-09 00:34:41.266270 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.45s 2026-04-09 00:34:41.266280 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.62s 2026-04-09 00:34:41.266291 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.19s 2026-04-09 00:34:41.266302 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.98s 2026-04-09 00:34:41.266313 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 1.93s 2026-04-09 00:34:41.266353 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.70s 2026-04-09 00:34:41.266365 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.61s 2026-04-09 00:34:41.266375 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.51s 2026-04-09 00:34:41.266386 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.42s 2026-04-09 00:34:41.266397 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.25s 2026-04-09 00:34:41.266408 | orchestrator | 
osism.commons.network : Include cleanup tasks --------------------------- 1.22s 2026-04-09 00:34:41.266419 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.18s 2026-04-09 00:34:41.266429 | orchestrator | osism.commons.network : Create required directories --------------------- 1.16s 2026-04-09 00:34:41.266440 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.15s 2026-04-09 00:34:41.266451 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.10s 2026-04-09 00:34:41.266462 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.10s 2026-04-09 00:34:41.266472 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.03s 2026-04-09 00:34:41.408907 | orchestrator | + osism apply wireguard 2026-04-09 00:34:52.566776 | orchestrator | 2026-04-09 00:34:52 | INFO  | Prepare task for execution of wireguard. 2026-04-09 00:34:52.642222 | orchestrator | 2026-04-09 00:34:52 | INFO  | Task 12bb00ae-92e7-4c15-a275-96cbc7b5921d (wireguard) was prepared for execution. 2026-04-09 00:34:52.642347 | orchestrator | 2026-04-09 00:34:52 | INFO  | It takes a moment until task 12bb00ae-92e7-4c15-a275-96cbc7b5921d (wireguard) has been started and output is visible here. 
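Aside: the cleanup tasks above skip removal of files like `/etc/systemd/network/30-vxlan0.netdev` because those VXLAN overlay interfaces are still in use on the testbed nodes. As a rough illustration of what such a systemd-networkd pair looks like (all values here — VNI, names, the underlay binding — are illustrative assumptions, not taken from this job):

```ini
; /etc/systemd/network/30-vxlan0.netdev (illustrative sketch)
[NetDev]
Name=vxlan0
Kind=vxlan

[VXLAN]
VNI=42
DestinationPort=4789
MacVLAN=no

; /etc/systemd/network/30-vxlan0.network (binds the overlay, illustrative)
[Match]
Name=vxlan0

[Network]
Address=192.168.112.10/20
```

The matching `.network` file on the underlay interface would reference the netdev via a `VXLAN=vxlan0` entry in its `[Network]` section; `networkctl reload` (the "Reload systemd-networkd" handler seen above) picks up changes to these files.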
2026-04-09 00:35:11.690762 | orchestrator | 2026-04-09 00:35:11.690875 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-04-09 00:35:11.690888 | orchestrator | 2026-04-09 00:35:11.690896 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-04-09 00:35:11.690905 | orchestrator | Thursday 09 April 2026 00:34:56 +0000 (0:00:00.310) 0:00:00.310 ******** 2026-04-09 00:35:11.690913 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:11.690923 | orchestrator | 2026-04-09 00:35:11.690931 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-04-09 00:35:11.690939 | orchestrator | Thursday 09 April 2026 00:34:57 +0000 (0:00:01.807) 0:00:02.118 ******** 2026-04-09 00:35:11.690947 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:11.690955 | orchestrator | 2026-04-09 00:35:11.690963 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-04-09 00:35:11.690971 | orchestrator | Thursday 09 April 2026 00:35:04 +0000 (0:00:06.396) 0:00:08.514 ******** 2026-04-09 00:35:11.690979 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:11.690987 | orchestrator | 2026-04-09 00:35:11.690995 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-04-09 00:35:11.691003 | orchestrator | Thursday 09 April 2026 00:35:04 +0000 (0:00:00.531) 0:00:09.045 ******** 2026-04-09 00:35:11.691011 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:11.691019 | orchestrator | 2026-04-09 00:35:11.691027 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-04-09 00:35:11.691035 | orchestrator | Thursday 09 April 2026 00:35:05 +0000 (0:00:00.438) 0:00:09.484 ******** 2026-04-09 00:35:11.691043 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:11.691052 | orchestrator | 2026-04-09 
00:35:11.691060 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-04-09 00:35:11.691068 | orchestrator | Thursday 09 April 2026 00:35:05 +0000 (0:00:00.526) 0:00:10.011 ******** 2026-04-09 00:35:11.691076 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:11.691083 | orchestrator | 2026-04-09 00:35:11.691092 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-04-09 00:35:11.691119 | orchestrator | Thursday 09 April 2026 00:35:06 +0000 (0:00:00.431) 0:00:10.442 ******** 2026-04-09 00:35:11.691133 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:11.691146 | orchestrator | 2026-04-09 00:35:11.691159 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-04-09 00:35:11.691171 | orchestrator | Thursday 09 April 2026 00:35:06 +0000 (0:00:00.420) 0:00:10.863 ******** 2026-04-09 00:35:11.691183 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:11.691195 | orchestrator | 2026-04-09 00:35:11.691208 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-04-09 00:35:11.691221 | orchestrator | Thursday 09 April 2026 00:35:07 +0000 (0:00:01.182) 0:00:12.046 ******** 2026-04-09 00:35:11.691234 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-09 00:35:11.691246 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:11.691259 | orchestrator | 2026-04-09 00:35:11.691273 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-04-09 00:35:11.691286 | orchestrator | Thursday 09 April 2026 00:35:08 +0000 (0:00:00.911) 0:00:12.957 ******** 2026-04-09 00:35:11.691299 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:11.691312 | orchestrator | 2026-04-09 00:35:11.691326 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-04-09 
00:35:11.691337 | orchestrator | Thursday 09 April 2026 00:35:10 +0000 (0:00:01.902) 0:00:14.859 ******** 2026-04-09 00:35:11.691347 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:11.691356 | orchestrator | 2026-04-09 00:35:11.691365 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:35:11.691375 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:35:11.691385 | orchestrator | 2026-04-09 00:35:11.691395 | orchestrator | 2026-04-09 00:35:11.691404 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:35:11.691413 | orchestrator | Thursday 09 April 2026 00:35:11 +0000 (0:00:00.898) 0:00:15.758 ******** 2026-04-09 00:35:11.691423 | orchestrator | =============================================================================== 2026-04-09 00:35:11.691432 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.40s 2026-04-09 00:35:11.691442 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.90s 2026-04-09 00:35:11.691474 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.81s 2026-04-09 00:35:11.691484 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s 2026-04-09 00:35:11.691493 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s 2026-04-09 00:35:11.691503 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s 2026-04-09 00:35:11.691512 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s 2026-04-09 00:35:11.691521 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2026-04-09 00:35:11.691530 | orchestrator | osism.services.wireguard : 
Create preshared key ------------------------- 0.44s 2026-04-09 00:35:11.691539 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s 2026-04-09 00:35:11.691549 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2026-04-09 00:35:11.858904 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-04-09 00:35:11.891734 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-04-09 00:35:11.891812 | orchestrator | Dload Upload Total Spent Left Speed 2026-04-09 00:35:11.967129 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 186 0 --:--:-- --:--:-- --:--:-- 186 2026-04-09 00:35:11.981512 | orchestrator | + osism apply --environment custom workarounds 2026-04-09 00:35:13.247619 | orchestrator | 2026-04-09 00:35:13 | INFO  | Trying to run play workarounds in environment custom 2026-04-09 00:35:23.415045 | orchestrator | 2026-04-09 00:35:23 | INFO  | Prepare task for execution of workarounds. 2026-04-09 00:35:23.494692 | orchestrator | 2026-04-09 00:35:23 | INFO  | Task db9597f3-4646-4959-8f91-7529edd26917 (workarounds) was prepared for execution. 2026-04-09 00:35:23.494797 | orchestrator | 2026-04-09 00:35:23 | INFO  | It takes a moment until task db9597f3-4646-4959-8f91-7529edd26917 (workarounds) has been started and output is visible here. 
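Aside: the wireguard role run above generates a server keypair and preshared key, renders `wg0.conf`, and enables `wg-quick@wg0.service`. A minimal sketch of the kind of configuration this produces (addresses, port, and placeholders are assumptions for illustration; the actual template lives in the `osism.services.wireguard` role):

```ini
; /etc/wireguard/wg0.conf (illustrative sketch, not the role's actual template)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 10.8.0.2/32
```

With a file like this in place, `systemctl enable --now wg-quick@wg0` (what the "Manage wg-quick@wg0.service service" task does) brings the tunnel up; the "Copy client configuration files" task writes the mirror-image `[Interface]`/`[Peer]` file handed to clients.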
2026-04-09 00:35:48.105395 | orchestrator | 2026-04-09 00:35:48.105520 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:35:48.105530 | orchestrator | 2026-04-09 00:35:48.105535 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-04-09 00:35:48.105541 | orchestrator | Thursday 09 April 2026 00:35:26 +0000 (0:00:00.180) 0:00:00.180 ******** 2026-04-09 00:35:48.105548 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-04-09 00:35:48.105557 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-04-09 00:35:48.105566 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-04-09 00:35:48.105575 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-04-09 00:35:48.105584 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-04-09 00:35:48.105593 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-04-09 00:35:48.105602 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-04-09 00:35:48.105612 | orchestrator | 2026-04-09 00:35:48.105621 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-04-09 00:35:48.105629 | orchestrator | 2026-04-09 00:35:48.105638 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-04-09 00:35:48.105648 | orchestrator | Thursday 09 April 2026 00:35:27 +0000 (0:00:00.745) 0:00:00.925 ******** 2026-04-09 00:35:48.105656 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:48.105662 | orchestrator | 2026-04-09 00:35:48.105667 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-04-09 00:35:48.105673 | orchestrator | 2026-04-09 00:35:48.105678 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-04-09 00:35:48.105684 | orchestrator | Thursday 09 April 2026 00:35:30 +0000 (0:00:02.628) 0:00:03.554 ******** 2026-04-09 00:35:48.105689 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:35:48.105695 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:35:48.105700 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:35:48.105705 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:35:48.105710 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:35:48.105715 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:35:48.105720 | orchestrator | 2026-04-09 00:35:48.105726 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-04-09 00:35:48.105731 | orchestrator | 2026-04-09 00:35:48.105737 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-04-09 00:35:48.105742 | orchestrator | Thursday 09 April 2026 00:35:32 +0000 (0:00:02.332) 0:00:05.887 ******** 2026-04-09 00:35:48.105748 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-09 00:35:48.105754 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-09 00:35:48.105759 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-09 00:35:48.105764 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-09 00:35:48.105769 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-09 00:35:48.105774 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-09 00:35:48.105797 | orchestrator | 2026-04-09 00:35:48.105803 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-04-09 00:35:48.105808 | orchestrator | Thursday 09 April 2026 00:35:33 +0000 (0:00:01.367) 0:00:07.254 ******** 2026-04-09 00:35:48.105813 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:35:48.105818 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:35:48.105823 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:35:48.105828 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:35:48.105833 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:35:48.105838 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:35:48.105843 | orchestrator | 2026-04-09 00:35:48.105849 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-04-09 00:35:48.105854 | orchestrator | Thursday 09 April 2026 00:35:37 +0000 (0:00:03.839) 0:00:11.093 ******** 2026-04-09 00:35:48.105859 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:35:48.105864 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:35:48.105869 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:35:48.105874 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:35:48.105879 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:35:48.105884 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:35:48.105889 | orchestrator | 2026-04-09 00:35:48.105894 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-04-09 00:35:48.105899 | orchestrator | 2026-04-09 00:35:48.105904 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-04-09 00:35:48.105909 | orchestrator | Thursday 09 April 2026 00:35:38 +0000 (0:00:00.522) 0:00:11.616 ******** 2026-04-09 00:35:48.105915 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:48.105922 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:35:48.105931 | orchestrator | changed: [testbed-node-1] 2026-04-09 
00:35:48.105938 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:35:48.105960 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:35:48.105970 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:35:48.105977 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:35:48.105983 | orchestrator | 2026-04-09 00:35:48.105990 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-04-09 00:35:48.105997 | orchestrator | Thursday 09 April 2026 00:35:39 +0000 (0:00:01.718) 0:00:13.335 ******** 2026-04-09 00:35:48.106006 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:48.106064 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:35:48.106077 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:35:48.106086 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:35:48.106096 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:35:48.106106 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:35:48.106132 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:35:48.106140 | orchestrator | 2026-04-09 00:35:48.106147 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-04-09 00:35:48.106154 | orchestrator | Thursday 09 April 2026 00:35:41 +0000 (0:00:01.453) 0:00:14.788 ******** 2026-04-09 00:35:48.106161 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:35:48.106167 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:35:48.106174 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:35:48.106181 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:48.106188 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:35:48.106195 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:35:48.106202 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:35:48.106210 | orchestrator | 2026-04-09 00:35:48.106220 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-04-09 00:35:48.106230 | orchestrator 
| Thursday 09 April 2026 00:35:42 +0000 (0:00:01.697) 0:00:16.485 ******** 2026-04-09 00:35:48.106240 | orchestrator | changed: [testbed-manager] 2026-04-09 00:35:48.106250 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:35:48.106260 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:35:48.106269 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:35:48.106279 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:35:48.106295 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:35:48.106302 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:35:48.106312 | orchestrator | 2026-04-09 00:35:48.106322 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-04-09 00:35:48.106332 | orchestrator | Thursday 09 April 2026 00:35:44 +0000 (0:00:01.704) 0:00:18.190 ******** 2026-04-09 00:35:48.106342 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:35:48.106351 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:35:48.106361 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:35:48.106370 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:35:48.106380 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:35:48.106389 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:35:48.106399 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:35:48.106425 | orchestrator | 2026-04-09 00:35:48.106435 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-04-09 00:35:48.106446 | orchestrator | 2026-04-09 00:35:48.106456 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-04-09 00:35:48.106465 | orchestrator | Thursday 09 April 2026 00:35:45 +0000 (0:00:00.742) 0:00:18.932 ******** 2026-04-09 00:35:48.106472 | orchestrator | ok: [testbed-manager] 2026-04-09 00:35:48.106481 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:35:48.106490 | orchestrator | ok: 
[testbed-node-2] 2026-04-09 00:35:48.106499 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:35:48.106508 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:35:48.106516 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:35:48.106525 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:35:48.106534 | orchestrator | 2026-04-09 00:35:48.106542 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:35:48.106554 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:35:48.106567 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:35:48.106577 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:35:48.106587 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:35:48.106596 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:35:48.106607 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:35:48.106614 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:35:48.106619 | orchestrator | 2026-04-09 00:35:48.106625 | orchestrator | 2026-04-09 00:35:48.106631 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:35:48.106637 | orchestrator | Thursday 09 April 2026 00:35:48 +0000 (0:00:02.679) 0:00:21.612 ******** 2026-04-09 00:35:48.106642 | orchestrator | =============================================================================== 2026-04-09 00:35:48.106648 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.84s 2026-04-09 00:35:48.106654 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.68s 2026-04-09 00:35:48.106660 | orchestrator | Apply netplan configuration --------------------------------------------- 2.63s 2026-04-09 00:35:48.106666 | orchestrator | Apply netplan configuration --------------------------------------------- 2.33s 2026-04-09 00:35:48.106671 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.72s 2026-04-09 00:35:48.106688 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.71s 2026-04-09 00:35:48.106694 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.70s 2026-04-09 00:35:48.106700 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.45s 2026-04-09 00:35:48.106705 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.37s 2026-04-09 00:35:48.106711 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.75s 2026-04-09 00:35:48.106717 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.74s 2026-04-09 00:35:48.106731 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.52s 2026-04-09 00:35:48.558138 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-04-09 00:35:59.915529 | orchestrator | 2026-04-09 00:35:59 | INFO  | Prepare task for execution of reboot. 2026-04-09 00:35:59.989928 | orchestrator | 2026-04-09 00:35:59 | INFO  | Task 92038bcb-4e47-4086-a2c8-68b327f998bd (reboot) was prepared for execution. 2026-04-09 00:35:59.990065 | orchestrator | 2026-04-09 00:35:59 | INFO  | It takes a moment until task 92038bcb-4e47-4086-a2c8-68b327f998bd (reboot) has been started and output is visible here. 
2026-04-09 00:36:10.574857 | orchestrator | 2026-04-09 00:36:10.574987 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 00:36:10.575008 | orchestrator | 2026-04-09 00:36:10.575020 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 00:36:10.575032 | orchestrator | Thursday 09 April 2026 00:36:02 +0000 (0:00:00.224) 0:00:00.224 ******** 2026-04-09 00:36:10.575043 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:36:10.575055 | orchestrator | 2026-04-09 00:36:10.575066 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 00:36:10.575077 | orchestrator | Thursday 09 April 2026 00:36:03 +0000 (0:00:00.120) 0:00:00.345 ******** 2026-04-09 00:36:10.575088 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:36:10.575099 | orchestrator | 2026-04-09 00:36:10.575110 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 00:36:10.575121 | orchestrator | Thursday 09 April 2026 00:36:04 +0000 (0:00:01.199) 0:00:01.545 ******** 2026-04-09 00:36:10.575131 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:36:10.575142 | orchestrator | 2026-04-09 00:36:10.575153 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 00:36:10.575164 | orchestrator | 2026-04-09 00:36:10.575174 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 00:36:10.575185 | orchestrator | Thursday 09 April 2026 00:36:04 +0000 (0:00:00.092) 0:00:01.637 ******** 2026-04-09 00:36:10.575196 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:36:10.575207 | orchestrator | 2026-04-09 00:36:10.575217 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 00:36:10.575228 | orchestrator | Thursday 09 April 
2026 00:36:04 +0000 (0:00:00.091) 0:00:01.729 ******** 2026-04-09 00:36:10.575239 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:36:10.575249 | orchestrator | 2026-04-09 00:36:10.575260 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 00:36:10.575271 | orchestrator | Thursday 09 April 2026 00:36:05 +0000 (0:00:01.008) 0:00:02.737 ******** 2026-04-09 00:36:10.575282 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:36:10.575293 | orchestrator | 2026-04-09 00:36:10.575303 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 00:36:10.575314 | orchestrator | 2026-04-09 00:36:10.575325 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 00:36:10.575335 | orchestrator | Thursday 09 April 2026 00:36:05 +0000 (0:00:00.106) 0:00:02.843 ******** 2026-04-09 00:36:10.575346 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:36:10.575357 | orchestrator | 2026-04-09 00:36:10.575368 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 00:36:10.575447 | orchestrator | Thursday 09 April 2026 00:36:05 +0000 (0:00:00.088) 0:00:02.932 ******** 2026-04-09 00:36:10.575462 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:36:10.575475 | orchestrator | 2026-04-09 00:36:10.575488 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 00:36:10.575501 | orchestrator | Thursday 09 April 2026 00:36:06 +0000 (0:00:01.007) 0:00:03.940 ******** 2026-04-09 00:36:10.575513 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:36:10.575526 | orchestrator | 2026-04-09 00:36:10.575538 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 00:36:10.575550 | orchestrator | 2026-04-09 00:36:10.575562 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-04-09 00:36:10.575575 | orchestrator | Thursday 09 April 2026 00:36:06 +0000 (0:00:00.111) 0:00:04.052 ******** 2026-04-09 00:36:10.575587 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:36:10.575600 | orchestrator | 2026-04-09 00:36:10.575613 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 00:36:10.575626 | orchestrator | Thursday 09 April 2026 00:36:06 +0000 (0:00:00.100) 0:00:04.152 ******** 2026-04-09 00:36:10.575638 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:36:10.575650 | orchestrator | 2026-04-09 00:36:10.575663 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 00:36:10.575675 | orchestrator | Thursday 09 April 2026 00:36:07 +0000 (0:00:01.040) 0:00:05.192 ******** 2026-04-09 00:36:10.575687 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:36:10.575700 | orchestrator | 2026-04-09 00:36:10.575713 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 00:36:10.575725 | orchestrator | 2026-04-09 00:36:10.575737 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 00:36:10.575750 | orchestrator | Thursday 09 April 2026 00:36:07 +0000 (0:00:00.113) 0:00:05.305 ******** 2026-04-09 00:36:10.575761 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:36:10.575772 | orchestrator | 2026-04-09 00:36:10.575798 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 00:36:10.575810 | orchestrator | Thursday 09 April 2026 00:36:08 +0000 (0:00:00.205) 0:00:05.510 ******** 2026-04-09 00:36:10.575821 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:36:10.575831 | orchestrator | 2026-04-09 00:36:10.575842 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-09 00:36:10.575853 | orchestrator | Thursday 09 April 2026 00:36:09 +0000 (0:00:01.006) 0:00:06.517 ******** 2026-04-09 00:36:10.575864 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:36:10.575875 | orchestrator | 2026-04-09 00:36:10.575885 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-09 00:36:10.575896 | orchestrator | 2026-04-09 00:36:10.575907 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-09 00:36:10.575917 | orchestrator | Thursday 09 April 2026 00:36:09 +0000 (0:00:00.109) 0:00:06.626 ******** 2026-04-09 00:36:10.575928 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:36:10.575939 | orchestrator | 2026-04-09 00:36:10.575949 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-09 00:36:10.575960 | orchestrator | Thursday 09 April 2026 00:36:09 +0000 (0:00:00.083) 0:00:06.710 ******** 2026-04-09 00:36:10.575971 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:36:10.575982 | orchestrator | 2026-04-09 00:36:10.575993 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-09 00:36:10.576003 | orchestrator | Thursday 09 April 2026 00:36:10 +0000 (0:00:00.971) 0:00:07.681 ******** 2026-04-09 00:36:10.576033 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:36:10.576044 | orchestrator | 2026-04-09 00:36:10.576055 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:36:10.576067 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:36:10.576087 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:36:10.576098 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-09 00:36:10.576109 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:36:10.576119 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:36:10.576130 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-09 00:36:10.576141 | orchestrator | 2026-04-09 00:36:10.576151 | orchestrator | 2026-04-09 00:36:10.576162 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:36:10.576173 | orchestrator | Thursday 09 April 2026 00:36:10 +0000 (0:00:00.033) 0:00:07.715 ******** 2026-04-09 00:36:10.576184 | orchestrator | =============================================================================== 2026-04-09 00:36:10.576194 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.23s 2026-04-09 00:36:10.576205 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.69s 2026-04-09 00:36:10.576216 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.57s 2026-04-09 00:36:10.695580 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-09 00:36:21.971107 | orchestrator | 2026-04-09 00:36:21 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-09 00:36:22.044296 | orchestrator | 2026-04-09 00:36:22 | INFO  | Task 06c3851b-5a21-44dc-99a4-26ecf622aa69 (wait-for-connection) was prepared for execution. 2026-04-09 00:36:22.044438 | orchestrator | 2026-04-09 00:36:22 | INFO  | It takes a moment until task 06c3851b-5a21-44dc-99a4-26ecf622aa69 (wait-for-connection) has been started and output is visible here. 
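The reboot plays above deliberately skip the "wait for the reboot to complete" task; instead, reconnection is verified by the separate `wait-for-connection` run that follows. A minimal sketch of such an SSH wait loop, assuming illustrative host names, timeouts, and ssh options (none of these are taken from the job itself):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of polling a rebooted node until SSH answers again.
# TIMEOUT and SLEEP_SECONDS are assumed knobs, not from the testbed scripts.

wait_for_ssh() {
    local host=$1
    local deadline=$(( $(date +%s) + ${TIMEOUT:-600} ))
    # BatchMode avoids hanging on a password prompt; "true" is a no-op probe.
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        (( $(date +%s) >= deadline )) && return 1
        sleep "${SLEEP_SECONDS:-5}"
    done
}
```

Decoupling the reboot from the reconnect check keeps each Ansible play short and lets all nodes reboot in parallel before a single wait pass.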
2026-04-09 00:36:37.078746 | orchestrator | 2026-04-09 00:36:37.078889 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-09 00:36:37.078918 | orchestrator | 2026-04-09 00:36:37.078937 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-09 00:36:37.078957 | orchestrator | Thursday 09 April 2026 00:36:25 +0000 (0:00:00.329) 0:00:00.329 ******** 2026-04-09 00:36:37.078976 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:36:37.078996 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:36:37.079015 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:36:37.079033 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:36:37.079051 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:36:37.079062 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:36:37.079073 | orchestrator | 2026-04-09 00:36:37.079084 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:36:37.079097 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:36:37.079109 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:36:37.079120 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:36:37.079131 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:36:37.079142 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:36:37.079182 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:36:37.079194 | orchestrator | 2026-04-09 00:36:37.079205 | orchestrator | 2026-04-09 00:36:37.079216 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-09 00:36:37.079227 | orchestrator | Thursday 09 April 2026 00:36:36 +0000 (0:00:11.589) 0:00:11.919 ******** 2026-04-09 00:36:37.079238 | orchestrator | =============================================================================== 2026-04-09 00:36:37.079249 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.59s 2026-04-09 00:36:37.200990 | orchestrator | + osism apply hddtemp 2026-04-09 00:36:48.483571 | orchestrator | 2026-04-09 00:36:48 | INFO  | Prepare task for execution of hddtemp. 2026-04-09 00:36:48.555092 | orchestrator | 2026-04-09 00:36:48 | INFO  | Task b6537d0c-2152-44d9-aa17-1b36f8e84fb4 (hddtemp) was prepared for execution. 2026-04-09 00:36:48.555181 | orchestrator | 2026-04-09 00:36:48 | INFO  | It takes a moment until task b6537d0c-2152-44d9-aa17-1b36f8e84fb4 (hddtemp) has been started and output is visible here. 2026-04-09 00:37:15.558812 | orchestrator | 2026-04-09 00:37:15.558943 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-09 00:37:15.558967 | orchestrator | 2026-04-09 00:37:15.558982 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-09 00:37:15.558998 | orchestrator | Thursday 09 April 2026 00:36:51 +0000 (0:00:00.295) 0:00:00.295 ******** 2026-04-09 00:37:15.559011 | orchestrator | ok: [testbed-manager] 2026-04-09 00:37:15.559022 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:37:15.559030 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:37:15.559039 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:37:15.559048 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:37:15.559056 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:37:15.559065 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:37:15.559074 | orchestrator | 2026-04-09 00:37:15.559083 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-09 00:37:15.559092 | orchestrator | Thursday 09 April 2026 00:36:52 +0000 (0:00:00.498) 0:00:00.793 ******** 2026-04-09 00:37:15.559102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:37:15.559114 | orchestrator | 2026-04-09 00:37:15.559123 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-09 00:37:15.559132 | orchestrator | Thursday 09 April 2026 00:36:52 +0000 (0:00:00.832) 0:00:01.626 ******** 2026-04-09 00:37:15.559140 | orchestrator | ok: [testbed-manager] 2026-04-09 00:37:15.559149 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:37:15.559158 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:37:15.559166 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:37:15.559175 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:37:15.559183 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:37:15.559198 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:37:15.559207 | orchestrator | 2026-04-09 00:37:15.559234 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-09 00:37:15.559243 | orchestrator | Thursday 09 April 2026 00:36:55 +0000 (0:00:02.264) 0:00:03.890 ******** 2026-04-09 00:37:15.559253 | orchestrator | changed: [testbed-manager] 2026-04-09 00:37:15.559263 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:37:15.559272 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:37:15.559281 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:37:15.559290 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:37:15.559298 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:37:15.559307 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:37:15.559345 | 
orchestrator | 2026-04-09 00:37:15.559356 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-09 00:37:15.559389 | orchestrator | Thursday 09 April 2026 00:36:55 +0000 (0:00:00.867) 0:00:04.757 ******** 2026-04-09 00:37:15.559399 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:37:15.559410 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:37:15.559420 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:37:15.559430 | orchestrator | ok: [testbed-manager] 2026-04-09 00:37:15.559440 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:37:15.559450 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:37:15.559460 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:37:15.559471 | orchestrator | 2026-04-09 00:37:15.559481 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-09 00:37:15.559491 | orchestrator | Thursday 09 April 2026 00:36:57 +0000 (0:00:01.792) 0:00:06.550 ******** 2026-04-09 00:37:15.559501 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:37:15.559511 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:37:15.559522 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:37:15.559531 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:37:15.559539 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:37:15.559548 | orchestrator | changed: [testbed-manager] 2026-04-09 00:37:15.559556 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:37:15.559565 | orchestrator | 2026-04-09 00:37:15.559573 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-09 00:37:15.559582 | orchestrator | Thursday 09 April 2026 00:36:58 +0000 (0:00:00.568) 0:00:07.118 ******** 2026-04-09 00:37:15.559591 | orchestrator | changed: [testbed-manager] 2026-04-09 00:37:15.559599 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:37:15.559607 | orchestrator | changed: [testbed-node-0] 
2026-04-09 00:37:15.559616 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:37:15.559624 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:37:15.559633 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:37:15.559642 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:37:15.559650 | orchestrator | 2026-04-09 00:37:15.559664 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-09 00:37:15.559673 | orchestrator | Thursday 09 April 2026 00:37:12 +0000 (0:00:13.939) 0:00:21.057 ******** 2026-04-09 00:37:15.559682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:37:15.559691 | orchestrator | 2026-04-09 00:37:15.559699 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-09 00:37:15.559708 | orchestrator | Thursday 09 April 2026 00:37:13 +0000 (0:00:01.161) 0:00:22.219 ******** 2026-04-09 00:37:15.559716 | orchestrator | changed: [testbed-manager] 2026-04-09 00:37:15.559725 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:37:15.559733 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:37:15.559742 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:37:15.559750 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:37:15.559759 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:37:15.559767 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:37:15.559776 | orchestrator | 2026-04-09 00:37:15.559784 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:37:15.559793 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:37:15.559821 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:37:15.559831 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:37:15.559840 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:37:15.559855 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:37:15.559864 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:37:15.559872 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-09 00:37:15.559881 | orchestrator | 2026-04-09 00:37:15.559890 | orchestrator | 2026-04-09 00:37:15.559898 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:37:15.559907 | orchestrator | Thursday 09 April 2026 00:37:15 +0000 (0:00:01.921) 0:00:24.140 ******** 2026-04-09 00:37:15.559916 | orchestrator | =============================================================================== 2026-04-09 00:37:15.559924 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.94s 2026-04-09 00:37:15.559933 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.26s 2026-04-09 00:37:15.559941 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s 2026-04-09 00:37:15.559950 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.79s 2026-04-09 00:37:15.559959 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.16s 2026-04-09 00:37:15.559967 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.87s 2026-04-09 00:37:15.559976 | orchestrator | osism.services.hddtemp : Include 
distribution specific install tasks ---- 0.83s 2026-04-09 00:37:15.559984 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.57s 2026-04-09 00:37:15.559993 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.50s 2026-04-09 00:37:15.694466 | orchestrator | ++ semver 10.0.0 7.1.1 2026-04-09 00:37:15.741239 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-09 00:37:15.741412 | orchestrator | + sudo systemctl restart manager.service 2026-04-09 00:37:33.203861 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-09 00:37:33.203969 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-09 00:37:33.203985 | orchestrator | + local max_attempts=60 2026-04-09 00:37:33.203998 | orchestrator | + local name=ceph-ansible 2026-04-09 00:37:33.204010 | orchestrator | + local attempt_num=1 2026-04-09 00:37:33.204022 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:37:33.243187 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:37:33.243281 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:37:33.243341 | orchestrator | + sleep 5 2026-04-09 00:37:38.248658 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:37:38.297259 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:37:38.297415 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:37:38.297431 | orchestrator | + sleep 5 2026-04-09 00:37:43.300755 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:37:43.342717 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:37:43.342820 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:37:43.342835 | orchestrator | + sleep 5 2026-04-09 00:37:48.346487 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 
2026-04-09 00:37:48.378450 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:37:48.378545 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:37:48.378558 | orchestrator | + sleep 5 2026-04-09 00:37:53.381956 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:37:53.418349 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:37:53.418456 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:37:53.418470 | orchestrator | + sleep 5 2026-04-09 00:37:58.422381 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:37:58.456787 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:37:58.456931 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:37:58.456949 | orchestrator | + sleep 5 2026-04-09 00:38:03.462430 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:38:03.501818 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:38:03.501932 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:38:03.501955 | orchestrator | + sleep 5 2026-04-09 00:38:08.506157 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:38:08.547193 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:38:08.547282 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:38:08.547292 | orchestrator | + sleep 5 2026-04-09 00:38:13.550670 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:38:13.582863 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:38:13.582958 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:38:13.582973 | orchestrator | + sleep 5 2026-04-09 00:38:18.586332 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 
00:38:18.622651 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:38:18.622750 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:38:18.622767 | orchestrator | + sleep 5 2026-04-09 00:38:23.626630 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:38:23.655290 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:38:23.655386 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:38:23.655404 | orchestrator | + sleep 5 2026-04-09 00:38:28.659825 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:38:28.696375 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:38:28.696466 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:38:28.696480 | orchestrator | + sleep 5 2026-04-09 00:38:33.700815 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:38:33.736193 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-09 00:38:33.736341 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-09 00:38:33.736358 | orchestrator | + sleep 5 2026-04-09 00:38:38.741208 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-09 00:38:38.779623 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:38:38.779711 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-09 00:38:38.779726 | orchestrator | + local max_attempts=60 2026-04-09 00:38:38.779738 | orchestrator | + local name=kolla-ansible 2026-04-09 00:38:38.779750 | orchestrator | + local attempt_num=1 2026-04-09 00:38:38.780688 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-09 00:38:38.818006 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:38:38.818158 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-09 00:38:38.818176 | 
orchestrator | + local max_attempts=60 2026-04-09 00:38:38.818189 | orchestrator | + local name=osism-ansible 2026-04-09 00:38:38.818201 | orchestrator | + local attempt_num=1 2026-04-09 00:38:38.819542 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-09 00:38:38.855692 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-09 00:38:38.855810 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-09 00:38:38.855827 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-09 00:38:38.999056 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-09 00:38:39.126010 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-09 00:38:39.290452 | orchestrator | ARA in osism-ansible already disabled. 2026-04-09 00:38:39.430551 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-09 00:38:39.430657 | orchestrator | + osism apply gather-facts 2026-04-09 00:38:50.747854 | orchestrator | 2026-04-09 00:38:50 | INFO  | Prepare task for execution of gather-facts. 2026-04-09 00:38:50.810391 | orchestrator | 2026-04-09 00:38:50 | INFO  | Task bb8bb48b-746f-4963-aa33-d2c5fd99fc54 (gather-facts) was prepared for execution. 2026-04-09 00:38:50.810482 | orchestrator | 2026-04-09 00:38:50 | INFO  | It takes a moment until task bb8bb48b-746f-4963-aa33-d2c5fd99fc54 (gather-facts) has been started and output is visible here. 
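The `set -x` trace above shows `wait_for_container_healthy` polling `docker inspect` until the container's health status transitions through `starting` to `healthy`. A reconstruction of that loop as a sketch (the real helper lives in the testbed scripts; `health_status` is factored out here only so the loop logic can be exercised without Docker):

```shell
#!/usr/bin/env bash
# Sketch reconstructed from the trace: poll a container's health status,
# giving up after max_attempts polls of roughly 5 seconds each.

health_status() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Loop until the container reports "healthy"; fail once attempts run out.
    until [[ "$(health_status "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep "${SLEEP_SECONDS:-5}"
    done
}
```

With 60 attempts at 5-second intervals this bounds the wait at roughly five minutes per container, which matches the ceph-ansible container taking about a minute to go `unhealthy` → `starting` → `healthy` in the log.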
2026-04-09 00:38:54.308173 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-04-09 00:38:54.308325 | orchestrator | -vvvv to see details
2026-04-09 00:38:54.308343 | orchestrator |
2026-04-09 00:38:54.308356 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-09 00:38:54.308396 | orchestrator |
2026-04-09 00:38:54.308409 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 00:38:54.308424 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-09 00:38:54.308437 | orchestrator | ...ignoring
2026-04-09 00:38:54.308449 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-09 00:38:54.308461 | orchestrator | ...ignoring
2026-04-09 00:38:54.308472 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-09 00:38:54.308484 | orchestrator | ...ignoring
2026-04-09 00:38:54.308495 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.13: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-09 00:38:54.308506 | orchestrator | ...ignoring
2026-04-09 00:38:54.308532 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-09 00:38:54.308543 | orchestrator | ...ignoring
2026-04-09 00:38:54.308555 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-09 00:38:54.308565 | orchestrator | ...ignoring
2026-04-09 00:38:54.308577 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-09 00:38:54.308588 | orchestrator | ...ignoring
2026-04-09 00:38:54.308599 | orchestrator |
2026-04-09 00:38:54.308610 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-09 00:38:54.308621 | orchestrator |
2026-04-09 00:38:54.308632 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-09 00:38:54.308643 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:38:54.308655 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:38:54.308666 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:38:54.308677 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:38:54.308689 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:38:54.308702 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:38:54.308715 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:38:54.308727 | orchestrator |
2026-04-09 00:38:54.308740 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:38:54.308754 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-09 00:38:54.308775 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-09 00:38:54.308788 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-09 00:38:54.308802 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-09 00:38:54.308834 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-09 00:38:54.308848 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-09 00:38:54.308860 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-09 00:38:54.308872 | orchestrator |
2026-04-09 00:38:54.418331 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-04-09 00:38:54.426936 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-04-09 00:38:54.443304 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-04-09 00:38:54.451659 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-04-09 00:38:54.467692 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-04-09 00:38:54.477670 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-04-09 00:38:54.487878 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-04-09 00:38:54.506308 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-04-09 00:38:54.523994 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-04-09 00:38:54.539211 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-04-09 00:38:54.557392 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-04-09 00:38:54.573293 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-04-09 00:38:54.591463 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-04-09 00:38:54.603466 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-04-09 00:38:54.623071 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-04-09 00:38:54.638467 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-04-09 00:38:54.654409 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-04-09 00:38:54.673111 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-04-09 00:38:54.689439 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-04-09 00:38:54.707518 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia
2026-04-09 00:38:54.726668 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-04-09 00:38:54.746369 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-04-09 00:38:54.763710 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-04-09 00:38:54.782338 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-09 00:38:55.082156 | orchestrator | ok: Runtime: 0:24:04.993334
2026-04-09 00:38:55.191225 |
2026-04-09 00:38:55.191362 | TASK [Deploy services]
2026-04-09 00:38:55.724426 | orchestrator | skipping: Conditional result was False
2026-04-09 00:38:55.742519 |
2026-04-09 00:38:55.742751 | TASK [Deploy in a nutshell]
2026-04-09 00:38:56.461586 | orchestrator | + set -e
2026-04-09 00:38:56.463025 | orchestrator |
2026-04-09 00:38:56.463078 | orchestrator | # PULL IMAGES
2026-04-09 00:38:56.463090 | orchestrator |
2026-04-09 00:38:56.463109 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-09 00:38:56.463126 | orchestrator | ++ export INTERACTIVE=false
2026-04-09 00:38:56.463139 | orchestrator | ++ INTERACTIVE=false
2026-04-09 00:38:56.463175 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-09 00:38:56.463194 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-09 00:38:56.463205 | orchestrator | + source /opt/manager-vars.sh
2026-04-09 00:38:56.463215 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-09 00:38:56.463267 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-09 00:38:56.463277 | orchestrator | ++ export CEPH_VERSION=
2026-04-09 00:38:56.463292 | orchestrator | ++ CEPH_VERSION=
2026-04-09 00:38:56.463306 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-09 00:38:56.463329 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-09 00:38:56.463343 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-09 00:38:56.463368 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-09 00:38:56.463385 | orchestrator | ++ export OPENSTACK_VERSION=
2026-04-09 00:38:56.463401 | orchestrator | ++ OPENSTACK_VERSION=
2026-04-09 00:38:56.463421 | orchestrator | ++ export ARA=false
2026-04-09 00:38:56.463436 | orchestrator | ++ ARA=false
2026-04-09 00:38:56.463446 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-09 00:38:56.463455 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-09 00:38:56.463465 | orchestrator | ++ export TEMPEST=true
2026-04-09 00:38:56.463474 | orchestrator | ++ TEMPEST=true
2026-04-09 00:38:56.463483 | orchestrator | ++ export IS_ZUUL=true
2026-04-09 00:38:56.463492 | orchestrator | ++ IS_ZUUL=true
2026-04-09 00:38:56.463501 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.59
2026-04-09 00:38:56.463510 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.59
2026-04-09 00:38:56.463520 | orchestrator | ++ export EXTERNAL_API=false
2026-04-09 00:38:56.463529 | orchestrator | ++ EXTERNAL_API=false
2026-04-09 00:38:56.463537 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-09 00:38:56.463546 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-09 00:38:56.463556 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-09 00:38:56.463565 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-09 00:38:56.463574 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-09 00:38:56.463583 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-09 00:38:56.463592 | orchestrator | + echo
2026-04-09 00:38:56.463601 | orchestrator | + echo '# PULL IMAGES'
2026-04-09 00:38:56.463610 | orchestrator | + echo
2026-04-09 00:38:56.463635 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-09 00:38:56.518909 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-09 00:38:56.519016 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-09 00:38:57.585625 | orchestrator | 2026-04-09 00:38:57 | INFO  | Trying to run play pull-images in environment custom
2026-04-09 00:39:07.646492 | orchestrator | 2026-04-09 00:39:07 | INFO  | Prepare task for execution of pull-images.
2026-04-09 00:39:07.716184 | orchestrator | 2026-04-09 00:39:07 | INFO  | Task 5af34f48-acc9-44bc-8bf9-237de01c6f58 (pull-images) was prepared for execution.
2026-04-09 00:39:07.716314 | orchestrator | 2026-04-09 00:39:07 | INFO  | Task 5af34f48-acc9-44bc-8bf9-237de01c6f58 is running in background. No more output. Check ARA for logs.
2026-04-09 00:39:09.007676 | orchestrator | 2026-04-09 00:39:09 | INFO  | Trying to run play wipe-partitions in environment custom
2026-04-09 00:39:19.058656 | orchestrator | 2026-04-09 00:39:19 | INFO  | Prepare task for execution of wipe-partitions.
2026-04-09 00:39:19.134903 | orchestrator | 2026-04-09 00:39:19 | INFO  | Task 2c25ef1b-0f12-445d-8d0f-28848e20e1ed (wipe-partitions) was prepared for execution.
2026-04-09 00:39:19.135009 | orchestrator | 2026-04-09 00:39:19 | INFO  | It takes a moment until task 2c25ef1b-0f12-445d-8d0f-28848e20e1ed (wipe-partitions) has been started and output is visible here.
2026-04-09 00:39:30.358637 | orchestrator |
2026-04-09 00:39:30.358797 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-09 00:39:30.358826 | orchestrator |
2026-04-09 00:39:30.358848 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-09 00:39:30.358890 | orchestrator | Thursday 09 April 2026 00:39:22 +0000 (0:00:00.157) 0:00:00.157 ********
2026-04-09 00:39:30.358923 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:39:30.358982 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:39:30.359005 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:39:30.359024 | orchestrator |
2026-04-09 00:39:30.359044 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-09 00:39:30.359064 | orchestrator | Thursday 09 April 2026 00:39:23 +0000 (0:00:01.170) 0:00:01.328 ********
2026-04-09 00:39:30.359084 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:39:30.359109 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:39:30.359129 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:39:30.359148 | orchestrator |
2026-04-09 00:39:30.359167 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-04-09 00:39:30.359188 | orchestrator | Thursday 09 April 2026 00:39:23 +0000 (0:00:00.501) 0:00:01.557 ********
2026-04-09 00:39:30.359208 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:39:30.359227 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:39:30.359276 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:39:30.359296 | orchestrator |
2026-04-09 00:39:30.359316 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-04-09 00:39:30.359337 | orchestrator | Thursday 09 April 2026 00:39:24 +0000 (0:00:00.501) 0:00:02.058 ********
2026-04-09 00:39:30.359357 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:39:30.359378 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:39:30.359399 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:39:30.359419 | orchestrator |
2026-04-09 00:39:30.359437 | orchestrator | TASK [Check device availability] ***********************************************
2026-04-09 00:39:30.359457 | orchestrator | Thursday 09 April 2026 00:39:24 +0000 (0:00:00.216) 0:00:02.275 ********
2026-04-09 00:39:30.359477 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-09 00:39:30.359503 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-09 00:39:30.359523 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-09 00:39:30.359542 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-09 00:39:30.359560 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-09 00:39:30.359579 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-09 00:39:30.359597 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-09 00:39:30.359617 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-09 00:39:30.359635 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-09 00:39:30.359654 | orchestrator |
2026-04-09 00:39:30.359666 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-04-09 00:39:30.359678 | orchestrator | Thursday 09 April 2026 00:39:25 +0000 (0:00:01.264) 0:00:03.539 ********
2026-04-09 00:39:30.359690 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-04-09 00:39:30.359701 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-04-09 00:39:30.359713 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-04-09 00:39:30.359724 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-04-09 00:39:30.359735 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-04-09 00:39:30.359747 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-04-09 00:39:30.359757 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-04-09 00:39:30.359769 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-04-09 00:39:30.359780 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-04-09 00:39:30.359792 | orchestrator |
2026-04-09 00:39:30.359803 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-04-09 00:39:30.359814 | orchestrator | Thursday 09 April 2026 00:39:26 +0000 (0:00:01.394) 0:00:04.934 ********
2026-04-09 00:39:30.359834 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-09 00:39:30.359845 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-09 00:39:30.359856 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-09 00:39:30.359867 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-09 00:39:30.359879 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-09 00:39:30.359902 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-09 00:39:30.359913 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-09 00:39:30.359925 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-09 00:39:30.359936 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-09 00:39:30.359947 | orchestrator |
2026-04-09 00:39:30.359958 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-09 00:39:30.359970 | orchestrator | Thursday 09 April 2026 00:39:29 +0000 (0:00:02.077) 0:00:07.012 ********
2026-04-09 00:39:30.359982 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:39:30.359993 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:39:30.360004 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:39:30.360016 | orchestrator |
2026-04-09 00:39:30.360027 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-09 00:39:30.360038 | orchestrator | Thursday 09 April 2026 00:39:29 +0000 (0:00:00.565) 0:00:07.577 ********
2026-04-09 00:39:30.360049 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:39:30.360061 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:39:30.360072 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:39:30.360083 | orchestrator |
2026-04-09 00:39:30.360096 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:39:30.360108 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:39:30.360121 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:39:30.360156 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:39:30.360168 | orchestrator |
2026-04-09 00:39:30.360179 | orchestrator |
2026-04-09 00:39:30.360191 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:39:30.360202 | orchestrator | Thursday 09 April 2026 00:39:30 +0000 (0:00:00.574) 0:00:08.151 ********
2026-04-09 00:39:30.360213 | orchestrator | ===============================================================================
2026-04-09 00:39:30.360225 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.08s
2026-04-09 00:39:30.360236 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.39s
2026-04-09 00:39:30.360329 | orchestrator | Check device availability ----------------------------------------------- 1.26s
2026-04-09 00:39:30.360349 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.17s
2026-04-09 00:39:30.360361 | orchestrator | Request device events from the kernel ----------------------------------- 0.57s
2026-04-09 00:39:30.360373 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s
2026-04-09 00:39:30.360384 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.50s
2026-04-09 00:39:30.360395 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s
2026-04-09 00:39:30.360407 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s
2026-04-09 00:39:41.620996 | orchestrator | 2026-04-09 00:39:41 | INFO  | Prepare task for execution of facts.
2026-04-09 00:39:41.685748 | orchestrator | 2026-04-09 00:39:41 | INFO  | Task 33d96dcf-f4a4-4e30-a06a-fba7ac7f70b5 (facts) was prepared for execution.
2026-04-09 00:39:41.685835 | orchestrator | 2026-04-09 00:39:41 | INFO  | It takes a moment until task 33d96dcf-f4a4-4e30-a06a-fba7ac7f70b5 (facts) has been started and output is visible here.
2026-04-09 00:39:52.951938 | orchestrator |
2026-04-09 00:39:52.952066 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-09 00:39:52.952087 | orchestrator |
2026-04-09 00:39:52.952103 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-09 00:39:52.952146 | orchestrator | Thursday 09 April 2026 00:39:44 +0000 (0:00:00.280) 0:00:00.280 ********
2026-04-09 00:39:52.952159 | orchestrator | ok: [testbed-manager]
2026-04-09 00:39:52.952172 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:39:52.952183 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:39:52.952194 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:39:52.952205 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:39:52.952216 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:39:52.952227 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:39:52.952238 | orchestrator |
2026-04-09 00:39:52.952250 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-09 00:39:52.952323 | orchestrator | Thursday 09 April 2026 00:39:45 +0000 (0:00:01.254) 0:00:01.535 ********
2026-04-09 00:39:52.952336 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:39:52.952348 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:39:52.952360 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:39:52.952371 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:39:52.952382 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:39:52.952393 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:39:52.952405 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:39:52.952416 | orchestrator |
2026-04-09 00:39:52.952428 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-09 00:39:52.952439 | orchestrator |
2026-04-09 00:39:52.952451 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 00:39:52.952462 | orchestrator | Thursday 09 April 2026 00:39:46 +0000 (0:00:01.099) 0:00:02.634 ********
2026-04-09 00:39:52.952493 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:39:52.952507 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:39:52.952520 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:39:52.952533 | orchestrator | ok: [testbed-manager]
2026-04-09 00:39:52.952546 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:39:52.952558 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:39:52.952571 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:39:52.952583 | orchestrator |
2026-04-09 00:39:52.952597 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-09 00:39:52.952609 | orchestrator |
2026-04-09 00:39:52.952622 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-09 00:39:52.952635 | orchestrator | Thursday 09 April 2026 00:39:52 +0000 (0:00:05.192) 0:00:07.827 ********
2026-04-09 00:39:52.952648 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:39:52.952661 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:39:52.952674 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:39:52.952687 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:39:52.952700 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:39:52.952713 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:39:52.952726 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:39:52.952739 | orchestrator |
2026-04-09 00:39:52.952752 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:39:52.952765 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:39:52.952779 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:39:52.952793 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:39:52.952806 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:39:52.952820 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:39:52.952831 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:39:52.952852 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:39:52.952863 | orchestrator |
2026-04-09 00:39:52.952875 | orchestrator |
2026-04-09 00:39:52.952886 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:39:52.952897 | orchestrator | Thursday 09 April 2026 00:39:52 +0000 (0:00:00.513) 0:00:08.340 ********
2026-04-09 00:39:52.952909 | orchestrator | ===============================================================================
2026-04-09 00:39:52.952920 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.19s
2026-04-09 00:39:52.952931 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s
2026-04-09 00:39:52.952942 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s
2026-04-09 00:39:52.952955 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2026-04-09 00:39:54.431911 | orchestrator | 2026-04-09 00:39:54 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-04-09 00:39:54.495630 | orchestrator | 2026-04-09 00:39:54 | INFO  | Task 2f9cb7b9-9c54-4a30-b3b2-d0506a5d777c (ceph-configure-lvm-volumes) was prepared for execution.
2026-04-09 00:39:54.495747 | orchestrator | 2026-04-09 00:39:54 | INFO  | It takes a moment until task 2f9cb7b9-9c54-4a30-b3b2-d0506a5d777c (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-04-09 00:40:04.976989 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-09 00:40:04.977124 | orchestrator | 2.16.14
2026-04-09 00:40:04.977155 | orchestrator |
2026-04-09 00:40:04.977178 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-09 00:40:04.977192 | orchestrator |
2026-04-09 00:40:04.977204 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-09 00:40:04.977217 | orchestrator | Thursday 09 April 2026 00:39:58 +0000 (0:00:00.262) 0:00:00.262 ********
2026-04-09 00:40:04.977230 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-09 00:40:04.977242 | orchestrator |
2026-04-09 00:40:04.977254 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-09 00:40:04.977298 | orchestrator | Thursday 09 April 2026 00:39:59 +0000 (0:00:00.214) 0:00:00.476 ********
2026-04-09 00:40:04.977311 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:40:04.977324 | orchestrator |
2026-04-09 00:40:04.977336 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:04.977348 | orchestrator | Thursday 09 April 2026 00:39:59 +0000 (0:00:00.192) 0:00:00.669 ********
2026-04-09 00:40:04.977359 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-09 00:40:04.977371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-09 00:40:04.977383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-09 00:40:04.977406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-09 00:40:04.977418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-09 00:40:04.977430 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-09 00:40:04.977441 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-09 00:40:04.977453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-09 00:40:04.977465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-09 00:40:04.977476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-09 00:40:04.977487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-09 00:40:04.977518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-09 00:40:04.977530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-09 00:40:04.977542 | orchestrator |
2026-04-09 00:40:04.977553 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:04.977565 | orchestrator | Thursday 09 April 2026 00:39:59 +0000 (0:00:00.319) 0:00:00.988 ********
2026-04-09 00:40:04.977577 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:04.977588 | orchestrator |
2026-04-09 00:40:04.977600 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:04.977611 | orchestrator | Thursday 09 April 2026 00:39:59 +0000 (0:00:00.361) 0:00:01.349 ********
2026-04-09 00:40:04.977622 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:04.977634 | orchestrator |
2026-04-09 00:40:04.977645 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:04.977656 | orchestrator | Thursday 09 April 2026 00:40:00 +0000 (0:00:00.174) 0:00:01.524 ********
2026-04-09 00:40:04.977673 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:04.977685 | orchestrator |
2026-04-09 00:40:04.977696 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:04.977708 | orchestrator | Thursday 09 April 2026 00:40:00 +0000 (0:00:00.164) 0:00:01.689 ********
2026-04-09 00:40:04.977720 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:04.977732 | orchestrator |
2026-04-09 00:40:04.977744 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:04.977755 | orchestrator | Thursday 09 April 2026 00:40:00 +0000 (0:00:00.178) 0:00:01.867 ********
2026-04-09 00:40:04.977767 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:04.977778 | orchestrator |
2026-04-09 00:40:04.977790 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:04.977802 | orchestrator | Thursday 09 April 2026 00:40:00 +0000 (0:00:00.169) 0:00:02.037 ********
2026-04-09 00:40:04.977813 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:04.977824 | orchestrator |
2026-04-09 00:40:04.977836 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:04.977847 | orchestrator | Thursday 09 April 2026 00:40:00 +0000 (0:00:00.165) 0:00:02.203 ********
2026-04-09 00:40:04.977859 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:04.977870 | orchestrator |
2026-04-09 00:40:04.977881 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:04.977893 | orchestrator | Thursday 09 April 2026 00:40:00 +0000 (0:00:00.176) 0:00:02.380 ********
2026-04-09 00:40:04.977904 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:40:04.977915 | orchestrator | 2026-04-09 00:40:04.977927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:04.977938 | orchestrator | Thursday 09 April 2026 00:40:01 +0000 (0:00:00.164) 0:00:02.545 ******** 2026-04-09 00:40:04.977950 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd) 2026-04-09 00:40:04.977962 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd) 2026-04-09 00:40:04.977974 | orchestrator | 2026-04-09 00:40:04.977985 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:04.978015 | orchestrator | Thursday 09 April 2026 00:40:01 +0000 (0:00:00.360) 0:00:02.905 ******** 2026-04-09 00:40:04.978121 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1117e366-620b-4195-b3cd-cb9d1ba2563b) 2026-04-09 00:40:04.978133 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1117e366-620b-4195-b3cd-cb9d1ba2563b) 2026-04-09 00:40:04.978145 | orchestrator | 2026-04-09 00:40:04.978157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:04.978168 | orchestrator | Thursday 09 April 2026 00:40:01 +0000 (0:00:00.364) 0:00:03.270 ******** 2026-04-09 00:40:04.978204 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cc2e9d6e-928c-46c6-aaaa-26c6da7e313f) 2026-04-09 00:40:04.978234 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cc2e9d6e-928c-46c6-aaaa-26c6da7e313f) 2026-04-09 00:40:04.978246 | orchestrator | 2026-04-09 00:40:04.978258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:04.978307 | orchestrator | Thursday 09 April 2026 00:40:02 
+0000 (0:00:00.505) 0:00:03.775 ******** 2026-04-09 00:40:04.978319 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b113bf69-5b2f-465f-b4d6-8ed3709e703c) 2026-04-09 00:40:04.978331 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b113bf69-5b2f-465f-b4d6-8ed3709e703c) 2026-04-09 00:40:04.978343 | orchestrator | 2026-04-09 00:40:04.978354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:04.978365 | orchestrator | Thursday 09 April 2026 00:40:02 +0000 (0:00:00.535) 0:00:04.311 ******** 2026-04-09 00:40:04.978377 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 00:40:04.978388 | orchestrator | 2026-04-09 00:40:04.978400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:04.978411 | orchestrator | Thursday 09 April 2026 00:40:03 +0000 (0:00:00.566) 0:00:04.878 ******** 2026-04-09 00:40:04.978423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-09 00:40:04.978434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-09 00:40:04.978445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-09 00:40:04.978456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-09 00:40:04.978468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-09 00:40:04.978479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-09 00:40:04.978490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-09 00:40:04.978502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-04-09 00:40:04.978513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-09 00:40:04.978524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-09 00:40:04.978536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-09 00:40:04.978547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-09 00:40:04.978558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-09 00:40:04.978570 | orchestrator | 2026-04-09 00:40:04.978582 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:04.978593 | orchestrator | Thursday 09 April 2026 00:40:03 +0000 (0:00:00.330) 0:00:05.209 ******** 2026-04-09 00:40:04.978604 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:40:04.978616 | orchestrator | 2026-04-09 00:40:04.978627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:04.978638 | orchestrator | Thursday 09 April 2026 00:40:03 +0000 (0:00:00.185) 0:00:05.394 ******** 2026-04-09 00:40:04.978650 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:40:04.978661 | orchestrator | 2026-04-09 00:40:04.978672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:04.978684 | orchestrator | Thursday 09 April 2026 00:40:04 +0000 (0:00:00.184) 0:00:05.578 ******** 2026-04-09 00:40:04.978695 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:40:04.978706 | orchestrator | 2026-04-09 00:40:04.978718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:04.978736 | orchestrator | Thursday 09 April 2026 00:40:04 
+0000 (0:00:00.158) 0:00:05.737 ******** 2026-04-09 00:40:04.978747 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:40:04.978759 | orchestrator | 2026-04-09 00:40:04.978770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:04.978781 | orchestrator | Thursday 09 April 2026 00:40:04 +0000 (0:00:00.169) 0:00:05.906 ******** 2026-04-09 00:40:04.978793 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:40:04.978804 | orchestrator | 2026-04-09 00:40:04.978815 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:04.978827 | orchestrator | Thursday 09 April 2026 00:40:04 +0000 (0:00:00.169) 0:00:06.075 ******** 2026-04-09 00:40:04.978838 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:40:04.978850 | orchestrator | 2026-04-09 00:40:04.978861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:04.978872 | orchestrator | Thursday 09 April 2026 00:40:04 +0000 (0:00:00.202) 0:00:06.278 ******** 2026-04-09 00:40:04.978884 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:40:04.978895 | orchestrator | 2026-04-09 00:40:04.978915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:11.755750 | orchestrator | Thursday 09 April 2026 00:40:04 +0000 (0:00:00.166) 0:00:06.445 ******** 2026-04-09 00:40:11.755831 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:40:11.755845 | orchestrator | 2026-04-09 00:40:11.755857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:11.755868 | orchestrator | Thursday 09 April 2026 00:40:05 +0000 (0:00:00.171) 0:00:06.616 ******** 2026-04-09 00:40:11.755878 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-09 00:40:11.755888 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-09 
2026-04-09 00:40:11.755899 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-09 00:40:11.755909 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-09 00:40:11.755919 | orchestrator |
2026-04-09 00:40:11.755929 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:11.755940 | orchestrator | Thursday 09 April 2026 00:40:06 +0000 (0:00:00.952) 0:00:07.568 ********
2026-04-09 00:40:11.755950 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.755960 | orchestrator |
2026-04-09 00:40:11.755970 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:11.755980 | orchestrator | Thursday 09 April 2026 00:40:06 +0000 (0:00:00.197) 0:00:07.766 ********
2026-04-09 00:40:11.755991 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.756001 | orchestrator |
2026-04-09 00:40:11.756027 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:11.756038 | orchestrator | Thursday 09 April 2026 00:40:06 +0000 (0:00:00.204) 0:00:07.970 ********
2026-04-09 00:40:11.756048 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.756058 | orchestrator |
2026-04-09 00:40:11.756068 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:11.756078 | orchestrator | Thursday 09 April 2026 00:40:06 +0000 (0:00:00.192) 0:00:08.162 ********
2026-04-09 00:40:11.756088 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.756098 | orchestrator |
2026-04-09 00:40:11.756109 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-09 00:40:11.756119 | orchestrator | Thursday 09 April 2026 00:40:06 +0000 (0:00:00.187) 0:00:08.350 ********
2026-04-09 00:40:11.756129 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-04-09 00:40:11.756139 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-04-09 00:40:11.756149 | orchestrator |
2026-04-09 00:40:11.756159 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-09 00:40:11.756169 | orchestrator | Thursday 09 April 2026 00:40:07 +0000 (0:00:00.162) 0:00:08.512 ********
2026-04-09 00:40:11.756179 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.756207 | orchestrator |
2026-04-09 00:40:11.756217 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-09 00:40:11.756227 | orchestrator | Thursday 09 April 2026 00:40:07 +0000 (0:00:00.112) 0:00:08.624 ********
2026-04-09 00:40:11.756237 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.756247 | orchestrator |
2026-04-09 00:40:11.756257 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-09 00:40:11.756317 | orchestrator | Thursday 09 April 2026 00:40:07 +0000 (0:00:00.163) 0:00:08.788 ********
2026-04-09 00:40:11.756330 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.756345 | orchestrator |
2026-04-09 00:40:11.756363 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-09 00:40:11.756379 | orchestrator | Thursday 09 April 2026 00:40:07 +0000 (0:00:00.142) 0:00:08.930 ********
2026-04-09 00:40:11.756397 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:40:11.756414 | orchestrator |
2026-04-09 00:40:11.756431 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-09 00:40:11.756448 | orchestrator | Thursday 09 April 2026 00:40:07 +0000 (0:00:00.152) 0:00:09.082 ********
2026-04-09 00:40:11.756464 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7170513-cc74-5c6a-bf20-0648bd8fe211'}})
2026-04-09 00:40:11.756482 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b054f04d-2068-53f2-80e7-c9a997d8c167'}})
2026-04-09 00:40:11.756500 | orchestrator |
2026-04-09 00:40:11.756517 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-09 00:40:11.756528 | orchestrator | Thursday 09 April 2026 00:40:07 +0000 (0:00:00.168) 0:00:09.251 ********
2026-04-09 00:40:11.756538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7170513-cc74-5c6a-bf20-0648bd8fe211'}})
2026-04-09 00:40:11.756555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b054f04d-2068-53f2-80e7-c9a997d8c167'}})
2026-04-09 00:40:11.756566 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.756576 | orchestrator |
2026-04-09 00:40:11.756586 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-09 00:40:11.756596 | orchestrator | Thursday 09 April 2026 00:40:07 +0000 (0:00:00.130) 0:00:09.382 ********
2026-04-09 00:40:11.756606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7170513-cc74-5c6a-bf20-0648bd8fe211'}})
2026-04-09 00:40:11.756622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b054f04d-2068-53f2-80e7-c9a997d8c167'}})
2026-04-09 00:40:11.756632 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.756642 | orchestrator |
2026-04-09 00:40:11.756652 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-09 00:40:11.756662 | orchestrator | Thursday 09 April 2026 00:40:08 +0000 (0:00:00.162) 0:00:09.544 ********
2026-04-09 00:40:11.756672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7170513-cc74-5c6a-bf20-0648bd8fe211'}})
2026-04-09 00:40:11.756698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b054f04d-2068-53f2-80e7-c9a997d8c167'}})
2026-04-09 00:40:11.756709 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.756719 | orchestrator |
2026-04-09 00:40:11.756729 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-09 00:40:11.756739 | orchestrator | Thursday 09 April 2026 00:40:08 +0000 (0:00:00.291) 0:00:09.836 ********
2026-04-09 00:40:11.756749 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:40:11.756759 | orchestrator |
2026-04-09 00:40:11.756769 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-09 00:40:11.756779 | orchestrator | Thursday 09 April 2026 00:40:08 +0000 (0:00:00.115) 0:00:09.951 ********
2026-04-09 00:40:11.756789 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:40:11.756799 | orchestrator |
2026-04-09 00:40:11.756809 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-09 00:40:11.756827 | orchestrator | Thursday 09 April 2026 00:40:08 +0000 (0:00:00.114) 0:00:10.065 ********
2026-04-09 00:40:11.756837 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.756847 | orchestrator |
2026-04-09 00:40:11.756857 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-09 00:40:11.756867 | orchestrator | Thursday 09 April 2026 00:40:08 +0000 (0:00:00.106) 0:00:10.172 ********
2026-04-09 00:40:11.756877 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.756887 | orchestrator |
2026-04-09 00:40:11.756897 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-09 00:40:11.756907 | orchestrator | Thursday 09 April 2026 00:40:08 +0000 (0:00:00.097) 0:00:10.269 ********
2026-04-09 00:40:11.756917 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.756927 | orchestrator |
2026-04-09 00:40:11.756937 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-09 00:40:11.756947 | orchestrator | Thursday 09 April 2026 00:40:08 +0000 (0:00:00.107) 0:00:10.377 ********
2026-04-09 00:40:11.756957 | orchestrator | ok: [testbed-node-3] => {
2026-04-09 00:40:11.756967 | orchestrator |     "ceph_osd_devices": {
2026-04-09 00:40:11.756977 | orchestrator |         "sdb": {
2026-04-09 00:40:11.756987 | orchestrator |             "osd_lvm_uuid": "a7170513-cc74-5c6a-bf20-0648bd8fe211"
2026-04-09 00:40:11.756997 | orchestrator |         },
2026-04-09 00:40:11.757007 | orchestrator |         "sdc": {
2026-04-09 00:40:11.757017 | orchestrator |             "osd_lvm_uuid": "b054f04d-2068-53f2-80e7-c9a997d8c167"
2026-04-09 00:40:11.757027 | orchestrator |         }
2026-04-09 00:40:11.757037 | orchestrator |     }
2026-04-09 00:40:11.757047 | orchestrator | }
2026-04-09 00:40:11.757057 | orchestrator |
2026-04-09 00:40:11.757067 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-09 00:40:11.757077 | orchestrator | Thursday 09 April 2026 00:40:09 +0000 (0:00:00.107) 0:00:10.484 ********
2026-04-09 00:40:11.757088 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.757098 | orchestrator |
2026-04-09 00:40:11.757108 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-09 00:40:11.757117 | orchestrator | Thursday 09 April 2026 00:40:09 +0000 (0:00:00.096) 0:00:10.581 ********
2026-04-09 00:40:11.757128 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.757138 | orchestrator |
2026-04-09 00:40:11.757147 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-09 00:40:11.757157 | orchestrator | Thursday 09 April 2026 00:40:09 +0000 (0:00:00.095) 0:00:10.676 ********
2026-04-09 00:40:11.757167 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:40:11.757177 | orchestrator |
2026-04-09 00:40:11.757187 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-09 00:40:11.757197 | orchestrator | Thursday 09 April 2026 00:40:09 +0000 (0:00:00.104) 0:00:10.781 ********
2026-04-09 00:40:11.757207 | orchestrator | changed: [testbed-node-3] => {
2026-04-09 00:40:11.757217 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-09 00:40:11.757227 | orchestrator |         "ceph_osd_devices": {
2026-04-09 00:40:11.757238 | orchestrator |             "sdb": {
2026-04-09 00:40:11.757252 | orchestrator |                 "osd_lvm_uuid": "a7170513-cc74-5c6a-bf20-0648bd8fe211"
2026-04-09 00:40:11.757262 | orchestrator |             },
2026-04-09 00:40:11.757318 | orchestrator |             "sdc": {
2026-04-09 00:40:11.757329 | orchestrator |                 "osd_lvm_uuid": "b054f04d-2068-53f2-80e7-c9a997d8c167"
2026-04-09 00:40:11.757339 | orchestrator |             }
2026-04-09 00:40:11.757350 | orchestrator |         },
2026-04-09 00:40:11.757361 | orchestrator |         "lvm_volumes": [
2026-04-09 00:40:11.757372 | orchestrator |             {
2026-04-09 00:40:11.757383 | orchestrator |                 "data": "osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211",
2026-04-09 00:40:11.757394 | orchestrator |                 "data_vg": "ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211"
2026-04-09 00:40:11.757405 | orchestrator |             },
2026-04-09 00:40:11.757427 | orchestrator |             {
2026-04-09 00:40:11.757438 | orchestrator |                 "data": "osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167",
2026-04-09 00:40:11.757449 | orchestrator |                 "data_vg": "ceph-b054f04d-2068-53f2-80e7-c9a997d8c167"
2026-04-09 00:40:11.757460 | orchestrator |             }
2026-04-09 00:40:11.757471 | orchestrator |         ]
2026-04-09 00:40:11.757485 | orchestrator |     }
2026-04-09 00:40:11.757502 | orchestrator | }
2026-04-09 00:40:11.757518 | orchestrator |
2026-04-09 00:40:11.757536 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-09 00:40:11.757553 | orchestrator | Thursday 09 April 2026 00:40:09 +0000 (0:00:00.174) 0:00:10.955 ********
2026-04-09 00:40:11.757568 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-09 00:40:11.757579 | orchestrator |
2026-04-09 00:40:11.757590 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-09 00:40:11.757600 | orchestrator |
2026-04-09 00:40:11.757610 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-09 00:40:11.757620 | orchestrator | Thursday 09 April 2026 00:40:11 +0000 (0:00:01.853) 0:00:12.809 ********
2026-04-09 00:40:11.757630 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-09 00:40:11.757641 | orchestrator |
2026-04-09 00:40:11.757651 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-09 00:40:11.757661 | orchestrator | Thursday 09 April 2026 00:40:11 +0000 (0:00:00.213) 0:00:13.023 ********
2026-04-09 00:40:11.757671 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:40:11.757681 | orchestrator |
2026-04-09 00:40:11.757698 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.272601 | orchestrator | Thursday 09 April 2026 00:40:11 +0000 (0:00:00.204) 0:00:13.227 ********
2026-04-09 00:40:18.272703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-04-09 00:40:18.272719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-04-09 00:40:18.272732 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-04-09 00:40:18.272744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-04-09 00:40:18.272755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-04-09 00:40:18.272766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-04-09 00:40:18.272777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-04-09 00:40:18.272788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-04-09 00:40:18.272805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-04-09 00:40:18.272817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-04-09 00:40:18.272828 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-04-09 00:40:18.272840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-04-09 00:40:18.272851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-04-09 00:40:18.272863 | orchestrator |
2026-04-09 00:40:18.272875 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.272886 | orchestrator | Thursday 09 April 2026 00:40:12 +0000 (0:00:00.315) 0:00:13.542 ********
2026-04-09 00:40:18.272898 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.272910 | orchestrator |
2026-04-09 00:40:18.272922 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.272934 | orchestrator | Thursday 09 April 2026 00:40:12 +0000 (0:00:00.175) 0:00:13.718 ********
2026-04-09 00:40:18.272945 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.272985 | orchestrator |
2026-04-09 00:40:18.273003 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.273049 | orchestrator | Thursday 09 April 2026 00:40:12 +0000 (0:00:00.175) 0:00:13.893 ********
2026-04-09 00:40:18.273079 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.273097 | orchestrator |
2026-04-09 00:40:18.273116 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.273135 | orchestrator | Thursday 09 April 2026 00:40:12 +0000 (0:00:00.166) 0:00:14.060 ********
2026-04-09 00:40:18.273156 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.273176 | orchestrator |
2026-04-09 00:40:18.273196 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.273216 | orchestrator | Thursday 09 April 2026 00:40:12 +0000 (0:00:00.179) 0:00:14.239 ********
2026-04-09 00:40:18.273237 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.273257 | orchestrator |
2026-04-09 00:40:18.273305 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.273319 | orchestrator | Thursday 09 April 2026 00:40:12 +0000 (0:00:00.174) 0:00:14.413 ********
2026-04-09 00:40:18.273332 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.273346 | orchestrator |
2026-04-09 00:40:18.273358 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.273370 | orchestrator | Thursday 09 April 2026 00:40:13 +0000 (0:00:00.420) 0:00:14.834 ********
2026-04-09 00:40:18.273382 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.273393 | orchestrator |
2026-04-09 00:40:18.273404 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.273416 | orchestrator | Thursday 09 April 2026 00:40:13 +0000 (0:00:00.181) 0:00:15.015 ********
2026-04-09 00:40:18.273428 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.273439 | orchestrator |
2026-04-09 00:40:18.273451 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.273462 | orchestrator | Thursday 09 April 2026 00:40:13 +0000 (0:00:00.172) 0:00:15.187 ********
2026-04-09 00:40:18.273474 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc)
2026-04-09 00:40:18.273487 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc)
2026-04-09 00:40:18.273498 | orchestrator |
2026-04-09 00:40:18.273509 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.273521 | orchestrator | Thursday 09 April 2026 00:40:14 +0000 (0:00:00.363) 0:00:15.551 ********
2026-04-09 00:40:18.273532 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a2730516-0b41-4086-99de-bfe7a2602e3b)
2026-04-09 00:40:18.273544 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a2730516-0b41-4086-99de-bfe7a2602e3b)
2026-04-09 00:40:18.273555 | orchestrator |
2026-04-09 00:40:18.273567 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.273578 | orchestrator | Thursday 09 April 2026 00:40:14 +0000 (0:00:00.368) 0:00:15.920 ********
2026-04-09 00:40:18.273590 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7d3f3539-bcc0-40e2-bb47-88465426d961)
2026-04-09 00:40:18.273601 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7d3f3539-bcc0-40e2-bb47-88465426d961)
2026-04-09 00:40:18.273612 | orchestrator |
2026-04-09 00:40:18.273624 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.273655 | orchestrator | Thursday 09 April 2026 00:40:14 +0000 (0:00:00.371) 0:00:16.291 ********
2026-04-09 00:40:18.273667 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_78a0dd59-f7ff-4f21-9079-dceaea0538fa)
2026-04-09 00:40:18.273679 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_78a0dd59-f7ff-4f21-9079-dceaea0538fa)
2026-04-09 00:40:18.273691 | orchestrator |
2026-04-09 00:40:18.273702 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:40:18.273726 | orchestrator | Thursday 09 April 2026 00:40:15 +0000 (0:00:00.374) 0:00:16.666 ********
2026-04-09 00:40:18.273737 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-09 00:40:18.273748 | orchestrator |
2026-04-09 00:40:18.273760 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:18.273772 | orchestrator | Thursday 09 April 2026 00:40:15 +0000 (0:00:00.279) 0:00:16.947 ********
2026-04-09 00:40:18.273783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-09 00:40:18.273794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-09 00:40:18.273805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-09 00:40:18.273817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-09 00:40:18.273828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-09 00:40:18.273840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-09 00:40:18.273851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-09 00:40:18.273862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-09 00:40:18.273880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-09 00:40:18.273892 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-09 00:40:18.273903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-09 00:40:18.273915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-09 00:40:18.273926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-09 00:40:18.273938 | orchestrator |
2026-04-09 00:40:18.273949 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:18.273960 | orchestrator | Thursday 09 April 2026 00:40:15 +0000 (0:00:00.325) 0:00:17.272 ********
2026-04-09 00:40:18.273972 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.273983 | orchestrator |
2026-04-09 00:40:18.273995 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:18.274006 | orchestrator | Thursday 09 April 2026 00:40:15 +0000 (0:00:00.181) 0:00:17.454 ********
2026-04-09 00:40:18.274069 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.274082 | orchestrator |
2026-04-09 00:40:18.274094 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:18.274106 | orchestrator | Thursday 09 April 2026 00:40:16 +0000 (0:00:00.437) 0:00:17.892 ********
2026-04-09 00:40:18.274129 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.274151 | orchestrator |
2026-04-09 00:40:18.274163 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:18.274174 | orchestrator | Thursday 09 April 2026 00:40:16 +0000 (0:00:00.182) 0:00:18.075 ********
2026-04-09 00:40:18.274186 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.274197 | orchestrator |
2026-04-09 00:40:18.274208 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:18.274220 | orchestrator | Thursday 09 April 2026 00:40:16 +0000 (0:00:00.189) 0:00:18.264 ********
2026-04-09 00:40:18.274231 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.274242 | orchestrator |
2026-04-09 00:40:18.274254 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:18.274265 | orchestrator | Thursday 09 April 2026 00:40:16 +0000 (0:00:00.197) 0:00:18.461 ********
2026-04-09 00:40:18.274294 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.274306 | orchestrator |
2026-04-09 00:40:18.274317 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:18.274335 | orchestrator | Thursday 09 April 2026 00:40:17 +0000 (0:00:00.186) 0:00:18.648 ********
2026-04-09 00:40:18.274347 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.274358 | orchestrator |
2026-04-09 00:40:18.274369 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:18.274381 | orchestrator | Thursday 09 April 2026 00:40:17 +0000 (0:00:00.184) 0:00:18.832 ********
2026-04-09 00:40:18.274392 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:18.274404 | orchestrator |
2026-04-09 00:40:18.274415 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:18.274426 | orchestrator | Thursday 09 April 2026 00:40:17 +0000 (0:00:00.184) 0:00:19.017 ********
2026-04-09 00:40:18.274438 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-09 00:40:18.274450 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-09 00:40:18.274462 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-09 00:40:18.274474 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-09 00:40:18.274485 | orchestrator |
2026-04-09 00:40:18.274497 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:18.274508 | orchestrator | Thursday 09 April 2026 00:40:18 +0000 (0:00:00.617) 0:00:19.634 ********
2026-04-09 00:40:18.274520 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:24.033182 | orchestrator |
2026-04-09 00:40:24.033366 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:24.033384 | orchestrator | Thursday 09 April 2026 00:40:18 +0000 (0:00:00.194) 0:00:19.828 ********
2026-04-09 00:40:24.033395 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:24.033406 | orchestrator |
2026-04-09 00:40:24.033416 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:24.033426 | orchestrator | Thursday 09 April 2026 00:40:18 +0000 (0:00:00.169) 0:00:19.998 ********
2026-04-09 00:40:24.033436 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:24.033446 | orchestrator |
2026-04-09 00:40:24.033456 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:40:24.033466 | orchestrator | Thursday 09 April 2026 00:40:18 +0000 (0:00:00.166) 0:00:20.164 ********
2026-04-09 00:40:24.033476 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:24.033485 | orchestrator |
2026-04-09 00:40:24.033495 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-09 00:40:24.033505 | orchestrator | Thursday 09 April 2026 00:40:18 +0000 (0:00:00.168) 0:00:20.333 ********
2026-04-09 00:40:24.033526 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-04-09 00:40:24.033536 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-04-09 00:40:24.033545 | orchestrator |
2026-04-09 00:40:24.033556 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-09 00:40:24.033566 | orchestrator | Thursday 09 April 2026 00:40:19 +0000 (0:00:00.312) 0:00:20.645 ********
2026-04-09 00:40:24.033575 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:24.033585 | orchestrator |
2026-04-09 00:40:24.033595 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-09 00:40:24.033604 | orchestrator | Thursday 09 April 2026 00:40:19 +0000 (0:00:00.135) 0:00:20.781 ********
2026-04-09 00:40:24.033614 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:24.033624 | orchestrator |
2026-04-09 00:40:24.033635 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-09 00:40:24.033645 | orchestrator | Thursday 09 April 2026 00:40:19 +0000 (0:00:00.120) 0:00:20.901 ********
2026-04-09 00:40:24.033655 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:40:24.033666 | orchestrator |
2026-04-09 00:40:24.033676 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-09 00:40:24.033686 | orchestrator | Thursday 09 April 2026 00:40:19 +0000 (0:00:00.125) 0:00:21.027 ********
2026-04-09 00:40:24.033717 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:40:24.033752 | orchestrator |
2026-04-09 00:40:24.033764 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-09 00:40:24.033775 | orchestrator | Thursday 09 April 2026 00:40:19 +0000 (0:00:00.131) 0:00:21.158 ********
2026-04-09 00:40:24.033786 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd7ebef9-c50f-5d78-8aca-8eab443ce24e'}})
2026-04-09 00:40:24.033798 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c145dd89-b6cf-5d58-ae96-f0c6197297d1'}})
2026-04-09 00:40:24.033809 | orchestrator |
2026-04-09 00:40:24.033820 | orchestrator | TASK
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-09 00:40:24.033831 | orchestrator | Thursday 09 April 2026 00:40:19 +0000 (0:00:00.160) 0:00:21.319 ******** 2026-04-09 00:40:24.033843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd7ebef9-c50f-5d78-8aca-8eab443ce24e'}})  2026-04-09 00:40:24.033855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c145dd89-b6cf-5d58-ae96-f0c6197297d1'}})  2026-04-09 00:40:24.033866 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:40:24.033877 | orchestrator | 2026-04-09 00:40:24.033887 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-09 00:40:24.033898 | orchestrator | Thursday 09 April 2026 00:40:19 +0000 (0:00:00.147) 0:00:21.467 ******** 2026-04-09 00:40:24.033909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd7ebef9-c50f-5d78-8aca-8eab443ce24e'}})  2026-04-09 00:40:24.033920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c145dd89-b6cf-5d58-ae96-f0c6197297d1'}})  2026-04-09 00:40:24.033931 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:40:24.033942 | orchestrator | 2026-04-09 00:40:24.033952 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-09 00:40:24.033961 | orchestrator | Thursday 09 April 2026 00:40:20 +0000 (0:00:00.148) 0:00:21.615 ******** 2026-04-09 00:40:24.033971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd7ebef9-c50f-5d78-8aca-8eab443ce24e'}})  2026-04-09 00:40:24.033982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c145dd89-b6cf-5d58-ae96-f0c6197297d1'}})  2026-04-09 00:40:24.033992 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:40:24.034002 | 
orchestrator | 2026-04-09 00:40:24.034013 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-09 00:40:24.034071 | orchestrator | Thursday 09 April 2026 00:40:20 +0000 (0:00:00.141) 0:00:21.757 ******** 2026-04-09 00:40:24.034082 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:40:24.034092 | orchestrator | 2026-04-09 00:40:24.034103 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-09 00:40:24.034113 | orchestrator | Thursday 09 April 2026 00:40:20 +0000 (0:00:00.111) 0:00:21.869 ******** 2026-04-09 00:40:24.034123 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:40:24.034133 | orchestrator | 2026-04-09 00:40:24.034142 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-09 00:40:24.034152 | orchestrator | Thursday 09 April 2026 00:40:20 +0000 (0:00:00.127) 0:00:21.996 ******** 2026-04-09 00:40:24.034178 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:40:24.034190 | orchestrator | 2026-04-09 00:40:24.034201 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-09 00:40:24.034211 | orchestrator | Thursday 09 April 2026 00:40:20 +0000 (0:00:00.118) 0:00:22.115 ******** 2026-04-09 00:40:24.034220 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:40:24.034230 | orchestrator | 2026-04-09 00:40:24.034240 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-09 00:40:24.034249 | orchestrator | Thursday 09 April 2026 00:40:20 +0000 (0:00:00.225) 0:00:22.341 ******** 2026-04-09 00:40:24.034260 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:40:24.034288 | orchestrator | 2026-04-09 00:40:24.034299 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-09 00:40:24.034316 | orchestrator | Thursday 09 April 2026 00:40:20 +0000 
(0:00:00.102) 0:00:22.443 ******** 2026-04-09 00:40:24.034326 | orchestrator | ok: [testbed-node-4] => { 2026-04-09 00:40:24.034336 | orchestrator |  "ceph_osd_devices": { 2026-04-09 00:40:24.034347 | orchestrator |  "sdb": { 2026-04-09 00:40:24.034358 | orchestrator |  "osd_lvm_uuid": "bd7ebef9-c50f-5d78-8aca-8eab443ce24e" 2026-04-09 00:40:24.034369 | orchestrator |  }, 2026-04-09 00:40:24.034379 | orchestrator |  "sdc": { 2026-04-09 00:40:24.034389 | orchestrator |  "osd_lvm_uuid": "c145dd89-b6cf-5d58-ae96-f0c6197297d1" 2026-04-09 00:40:24.034399 | orchestrator |  } 2026-04-09 00:40:24.034409 | orchestrator |  } 2026-04-09 00:40:24.034419 | orchestrator | } 2026-04-09 00:40:24.034428 | orchestrator | 2026-04-09 00:40:24.034439 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-09 00:40:24.034449 | orchestrator | Thursday 09 April 2026 00:40:21 +0000 (0:00:00.138) 0:00:22.581 ******** 2026-04-09 00:40:24.034460 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:40:24.034470 | orchestrator | 2026-04-09 00:40:24.034482 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-09 00:40:24.034492 | orchestrator | Thursday 09 April 2026 00:40:21 +0000 (0:00:00.133) 0:00:22.714 ******** 2026-04-09 00:40:24.034501 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:40:24.034511 | orchestrator | 2026-04-09 00:40:24.034520 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-09 00:40:24.034530 | orchestrator | Thursday 09 April 2026 00:40:21 +0000 (0:00:00.127) 0:00:22.841 ******** 2026-04-09 00:40:24.034541 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:40:24.034552 | orchestrator | 2026-04-09 00:40:24.034562 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-09 00:40:24.034572 | orchestrator | Thursday 09 April 2026 00:40:21 +0000 
(0:00:00.139) 0:00:22.981 ******** 2026-04-09 00:40:24.034581 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 00:40:24.034591 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-09 00:40:24.034601 | orchestrator |  "ceph_osd_devices": { 2026-04-09 00:40:24.034611 | orchestrator |  "sdb": { 2026-04-09 00:40:24.034620 | orchestrator |  "osd_lvm_uuid": "bd7ebef9-c50f-5d78-8aca-8eab443ce24e" 2026-04-09 00:40:24.034630 | orchestrator |  }, 2026-04-09 00:40:24.034640 | orchestrator |  "sdc": { 2026-04-09 00:40:24.034650 | orchestrator |  "osd_lvm_uuid": "c145dd89-b6cf-5d58-ae96-f0c6197297d1" 2026-04-09 00:40:24.034659 | orchestrator |  } 2026-04-09 00:40:24.034669 | orchestrator |  }, 2026-04-09 00:40:24.034678 | orchestrator |  "lvm_volumes": [ 2026-04-09 00:40:24.034688 | orchestrator |  { 2026-04-09 00:40:24.034698 | orchestrator |  "data": "osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e", 2026-04-09 00:40:24.034707 | orchestrator |  "data_vg": "ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e" 2026-04-09 00:40:24.034717 | orchestrator |  }, 2026-04-09 00:40:24.034732 | orchestrator |  { 2026-04-09 00:40:24.034742 | orchestrator |  "data": "osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1", 2026-04-09 00:40:24.034752 | orchestrator |  "data_vg": "ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1" 2026-04-09 00:40:24.034761 | orchestrator |  } 2026-04-09 00:40:24.034771 | orchestrator |  ] 2026-04-09 00:40:24.034781 | orchestrator |  } 2026-04-09 00:40:24.034790 | orchestrator | } 2026-04-09 00:40:24.034800 | orchestrator | 2026-04-09 00:40:24.034810 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-09 00:40:24.034819 | orchestrator | Thursday 09 April 2026 00:40:21 +0000 (0:00:00.192) 0:00:23.174 ******** 2026-04-09 00:40:24.034829 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-09 00:40:24.034839 | orchestrator | 2026-04-09 00:40:24.034849 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-09 00:40:24.034864 | orchestrator | 2026-04-09 00:40:24.034874 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 00:40:24.034884 | orchestrator | Thursday 09 April 2026 00:40:22 +0000 (0:00:01.064) 0:00:24.238 ******** 2026-04-09 00:40:24.034893 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-09 00:40:24.034903 | orchestrator | 2026-04-09 00:40:24.034913 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 00:40:24.034922 | orchestrator | Thursday 09 April 2026 00:40:23 +0000 (0:00:00.458) 0:00:24.697 ******** 2026-04-09 00:40:24.034932 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:40:24.034942 | orchestrator | 2026-04-09 00:40:24.034952 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:24.034961 | orchestrator | Thursday 09 April 2026 00:40:23 +0000 (0:00:00.554) 0:00:25.251 ******** 2026-04-09 00:40:24.034971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-09 00:40:24.034981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-09 00:40:24.034990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-09 00:40:24.035000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-09 00:40:24.035009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-09 00:40:24.035025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-09 00:40:31.208488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-09 00:40:31.208594 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-09 00:40:31.208610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-09 00:40:31.208622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-09 00:40:31.208634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-09 00:40:31.208646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-09 00:40:31.208657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-09 00:40:31.208669 | orchestrator | 2026-04-09 00:40:31.208681 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:31.208694 | orchestrator | Thursday 09 April 2026 00:40:24 +0000 (0:00:00.325) 0:00:25.577 ******** 2026-04-09 00:40:31.208705 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.208718 | orchestrator | 2026-04-09 00:40:31.208730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:31.208741 | orchestrator | Thursday 09 April 2026 00:40:24 +0000 (0:00:00.179) 0:00:25.756 ******** 2026-04-09 00:40:31.208752 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.208764 | orchestrator | 2026-04-09 00:40:31.208775 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:31.208786 | orchestrator | Thursday 09 April 2026 00:40:24 +0000 (0:00:00.176) 0:00:25.932 ******** 2026-04-09 00:40:31.208798 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.208809 | orchestrator | 2026-04-09 00:40:31.208820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:31.208831 | 
orchestrator | Thursday 09 April 2026 00:40:24 +0000 (0:00:00.181) 0:00:26.113 ******** 2026-04-09 00:40:31.208843 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.208854 | orchestrator | 2026-04-09 00:40:31.208865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:31.208877 | orchestrator | Thursday 09 April 2026 00:40:24 +0000 (0:00:00.178) 0:00:26.292 ******** 2026-04-09 00:40:31.208888 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.208926 | orchestrator | 2026-04-09 00:40:31.208938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:31.208949 | orchestrator | Thursday 09 April 2026 00:40:24 +0000 (0:00:00.162) 0:00:26.455 ******** 2026-04-09 00:40:31.208961 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.208972 | orchestrator | 2026-04-09 00:40:31.208983 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:31.208994 | orchestrator | Thursday 09 April 2026 00:40:25 +0000 (0:00:00.167) 0:00:26.623 ******** 2026-04-09 00:40:31.209006 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.209018 | orchestrator | 2026-04-09 00:40:31.209032 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:31.209046 | orchestrator | Thursday 09 April 2026 00:40:25 +0000 (0:00:00.172) 0:00:26.796 ******** 2026-04-09 00:40:31.209059 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.209071 | orchestrator | 2026-04-09 00:40:31.209084 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:31.209098 | orchestrator | Thursday 09 April 2026 00:40:25 +0000 (0:00:00.162) 0:00:26.958 ******** 2026-04-09 00:40:31.209111 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961) 2026-04-09 00:40:31.209124 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961) 2026-04-09 00:40:31.209137 | orchestrator | 2026-04-09 00:40:31.209150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:31.209163 | orchestrator | Thursday 09 April 2026 00:40:25 +0000 (0:00:00.504) 0:00:27.463 ******** 2026-04-09 00:40:31.209176 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4915a96f-c727-49cd-8e71-365065423554) 2026-04-09 00:40:31.209190 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4915a96f-c727-49cd-8e71-365065423554) 2026-04-09 00:40:31.209202 | orchestrator | 2026-04-09 00:40:31.209215 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:31.209228 | orchestrator | Thursday 09 April 2026 00:40:26 +0000 (0:00:00.636) 0:00:28.099 ******** 2026-04-09 00:40:31.209241 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_de323fae-e08c-44ab-9f5d-e0649991af02) 2026-04-09 00:40:31.209255 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_de323fae-e08c-44ab-9f5d-e0649991af02) 2026-04-09 00:40:31.209268 | orchestrator | 2026-04-09 00:40:31.209328 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:40:31.209342 | orchestrator | Thursday 09 April 2026 00:40:26 +0000 (0:00:00.372) 0:00:28.472 ******** 2026-04-09 00:40:31.209356 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0aa1a7f9-eb63-47f4-a3c4-c66e6167b3d6) 2026-04-09 00:40:31.209386 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0aa1a7f9-eb63-47f4-a3c4-c66e6167b3d6) 2026-04-09 00:40:31.209399 | orchestrator | 2026-04-09 00:40:31.209410 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-09 00:40:31.209421 | orchestrator | Thursday 09 April 2026 00:40:27 +0000 (0:00:00.380) 0:00:28.852 ******** 2026-04-09 00:40:31.209433 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 00:40:31.209444 | orchestrator | 2026-04-09 00:40:31.209455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.209486 | orchestrator | Thursday 09 April 2026 00:40:27 +0000 (0:00:00.295) 0:00:29.148 ******** 2026-04-09 00:40:31.209498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-09 00:40:31.209509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-09 00:40:31.209521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-09 00:40:31.209533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-09 00:40:31.209553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-09 00:40:31.209564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-09 00:40:31.209576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-09 00:40:31.209587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-09 00:40:31.209599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-09 00:40:31.209610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-09 00:40:31.209621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-04-09 00:40:31.209632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-09 00:40:31.209644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-09 00:40:31.209656 | orchestrator | 2026-04-09 00:40:31.209667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.209678 | orchestrator | Thursday 09 April 2026 00:40:28 +0000 (0:00:00.336) 0:00:29.484 ******** 2026-04-09 00:40:31.209695 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.209707 | orchestrator | 2026-04-09 00:40:31.209718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.209730 | orchestrator | Thursday 09 April 2026 00:40:28 +0000 (0:00:00.177) 0:00:29.662 ******** 2026-04-09 00:40:31.209741 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.209753 | orchestrator | 2026-04-09 00:40:31.209764 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.209776 | orchestrator | Thursday 09 April 2026 00:40:28 +0000 (0:00:00.182) 0:00:29.844 ******** 2026-04-09 00:40:31.209787 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.209799 | orchestrator | 2026-04-09 00:40:31.209810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.209822 | orchestrator | Thursday 09 April 2026 00:40:28 +0000 (0:00:00.176) 0:00:30.021 ******** 2026-04-09 00:40:31.209833 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.209845 | orchestrator | 2026-04-09 00:40:31.209856 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.209868 | orchestrator | Thursday 09 April 2026 00:40:28 +0000 (0:00:00.169) 0:00:30.191 ******** 2026-04-09 00:40:31.209879 
| orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.209891 | orchestrator | 2026-04-09 00:40:31.209902 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.209914 | orchestrator | Thursday 09 April 2026 00:40:28 +0000 (0:00:00.176) 0:00:30.367 ******** 2026-04-09 00:40:31.209925 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.209937 | orchestrator | 2026-04-09 00:40:31.209948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.209959 | orchestrator | Thursday 09 April 2026 00:40:29 +0000 (0:00:00.454) 0:00:30.822 ******** 2026-04-09 00:40:31.209971 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.209982 | orchestrator | 2026-04-09 00:40:31.209994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.210005 | orchestrator | Thursday 09 April 2026 00:40:29 +0000 (0:00:00.240) 0:00:31.062 ******** 2026-04-09 00:40:31.210076 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.210090 | orchestrator | 2026-04-09 00:40:31.210102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.210113 | orchestrator | Thursday 09 April 2026 00:40:29 +0000 (0:00:00.185) 0:00:31.248 ******** 2026-04-09 00:40:31.210125 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-09 00:40:31.210137 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-09 00:40:31.210156 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-09 00:40:31.210168 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-09 00:40:31.210180 | orchestrator | 2026-04-09 00:40:31.210192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.210204 | orchestrator | Thursday 09 April 2026 00:40:30 +0000 (0:00:00.605) 
0:00:31.853 ******** 2026-04-09 00:40:31.210215 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.210227 | orchestrator | 2026-04-09 00:40:31.210239 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.210251 | orchestrator | Thursday 09 April 2026 00:40:30 +0000 (0:00:00.204) 0:00:32.058 ******** 2026-04-09 00:40:31.210262 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.210305 | orchestrator | 2026-04-09 00:40:31.210318 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.210330 | orchestrator | Thursday 09 April 2026 00:40:30 +0000 (0:00:00.194) 0:00:32.253 ******** 2026-04-09 00:40:31.210341 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.210353 | orchestrator | 2026-04-09 00:40:31.210365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:40:31.210376 | orchestrator | Thursday 09 April 2026 00:40:30 +0000 (0:00:00.194) 0:00:32.447 ******** 2026-04-09 00:40:31.210388 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:31.210399 | orchestrator | 2026-04-09 00:40:31.210419 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-09 00:40:35.391623 | orchestrator | Thursday 09 April 2026 00:40:31 +0000 (0:00:00.233) 0:00:32.680 ******** 2026-04-09 00:40:35.391698 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-09 00:40:35.391705 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-09 00:40:35.391710 | orchestrator | 2026-04-09 00:40:35.391714 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-09 00:40:35.391719 | orchestrator | Thursday 09 April 2026 00:40:31 +0000 (0:00:00.178) 0:00:32.859 ******** 2026-04-09 00:40:35.391724 | orchestrator | skipping: 
[testbed-node-5] 2026-04-09 00:40:35.391728 | orchestrator | 2026-04-09 00:40:35.391733 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-09 00:40:35.391737 | orchestrator | Thursday 09 April 2026 00:40:31 +0000 (0:00:00.120) 0:00:32.980 ******** 2026-04-09 00:40:35.391741 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:35.391745 | orchestrator | 2026-04-09 00:40:35.391749 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-09 00:40:35.391753 | orchestrator | Thursday 09 April 2026 00:40:31 +0000 (0:00:00.131) 0:00:33.112 ******** 2026-04-09 00:40:35.391757 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:35.391761 | orchestrator | 2026-04-09 00:40:35.391764 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-09 00:40:35.391769 | orchestrator | Thursday 09 April 2026 00:40:31 +0000 (0:00:00.129) 0:00:33.241 ******** 2026-04-09 00:40:35.391773 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:40:35.391778 | orchestrator | 2026-04-09 00:40:35.391782 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-09 00:40:35.391786 | orchestrator | Thursday 09 April 2026 00:40:32 +0000 (0:00:00.321) 0:00:33.562 ******** 2026-04-09 00:40:35.391790 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e1b9ff7a-7324-53df-902d-27a5c0e1e380'}}) 2026-04-09 00:40:35.391795 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c85b9e91-1f7c-51a1-92b9-1f1081da5c54'}}) 2026-04-09 00:40:35.391799 | orchestrator | 2026-04-09 00:40:35.391803 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-09 00:40:35.391821 | orchestrator | Thursday 09 April 2026 00:40:32 +0000 (0:00:00.192) 0:00:33.755 ******** 2026-04-09 00:40:35.391825 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e1b9ff7a-7324-53df-902d-27a5c0e1e380'}})  2026-04-09 00:40:35.391844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c85b9e91-1f7c-51a1-92b9-1f1081da5c54'}})  2026-04-09 00:40:35.391848 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:35.391852 | orchestrator | 2026-04-09 00:40:35.391856 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-09 00:40:35.391860 | orchestrator | Thursday 09 April 2026 00:40:32 +0000 (0:00:00.157) 0:00:33.912 ******** 2026-04-09 00:40:35.391864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e1b9ff7a-7324-53df-902d-27a5c0e1e380'}})  2026-04-09 00:40:35.391868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c85b9e91-1f7c-51a1-92b9-1f1081da5c54'}})  2026-04-09 00:40:35.391873 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:35.391877 | orchestrator | 2026-04-09 00:40:35.391881 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-09 00:40:35.391885 | orchestrator | Thursday 09 April 2026 00:40:32 +0000 (0:00:00.161) 0:00:34.074 ******** 2026-04-09 00:40:35.391889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e1b9ff7a-7324-53df-902d-27a5c0e1e380'}})  2026-04-09 00:40:35.391893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c85b9e91-1f7c-51a1-92b9-1f1081da5c54'}})  2026-04-09 00:40:35.391897 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:40:35.391901 | orchestrator | 2026-04-09 00:40:35.391905 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-09 00:40:35.391909 | orchestrator | Thursday 09 April 2026 00:40:32 +0000 
(0:00:00.148) 0:00:34.222 ********
2026-04-09 00:40:35.391913 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:40:35.391917 | orchestrator |
2026-04-09 00:40:35.391921 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-09 00:40:35.391925 | orchestrator | Thursday 09 April 2026 00:40:32 +0000 (0:00:00.139) 0:00:34.362 ********
2026-04-09 00:40:35.391929 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:40:35.391933 | orchestrator |
2026-04-09 00:40:35.391937 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-09 00:40:35.391941 | orchestrator | Thursday 09 April 2026 00:40:33 +0000 (0:00:00.139) 0:00:34.501 ********
2026-04-09 00:40:35.391945 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:40:35.391949 | orchestrator |
2026-04-09 00:40:35.391953 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-09 00:40:35.391957 | orchestrator | Thursday 09 April 2026 00:40:33 +0000 (0:00:00.122) 0:00:34.623 ********
2026-04-09 00:40:35.391961 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:40:35.391965 | orchestrator |
2026-04-09 00:40:35.391969 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-09 00:40:35.391973 | orchestrator | Thursday 09 April 2026 00:40:33 +0000 (0:00:00.130) 0:00:34.753 ********
2026-04-09 00:40:35.391977 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:40:35.391981 | orchestrator |
2026-04-09 00:40:35.391984 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-09 00:40:35.391988 | orchestrator | Thursday 09 April 2026 00:40:33 +0000 (0:00:00.127) 0:00:34.881 ********
2026-04-09 00:40:35.391993 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 00:40:35.391997 | orchestrator |     "ceph_osd_devices": {
2026-04-09 00:40:35.392001 | orchestrator |         "sdb": {
2026-04-09 00:40:35.392014 | orchestrator |             "osd_lvm_uuid": "e1b9ff7a-7324-53df-902d-27a5c0e1e380"
2026-04-09 00:40:35.392019 | orchestrator |         },
2026-04-09 00:40:35.392023 | orchestrator |         "sdc": {
2026-04-09 00:40:35.392027 | orchestrator |             "osd_lvm_uuid": "c85b9e91-1f7c-51a1-92b9-1f1081da5c54"
2026-04-09 00:40:35.392032 | orchestrator |         }
2026-04-09 00:40:35.392036 | orchestrator |     }
2026-04-09 00:40:35.392040 | orchestrator | }
2026-04-09 00:40:35.392044 | orchestrator |
2026-04-09 00:40:35.392048 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-09 00:40:35.392056 | orchestrator | Thursday 09 April 2026 00:40:33 +0000 (0:00:00.134) 0:00:35.015 ********
2026-04-09 00:40:35.392060 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:40:35.392064 | orchestrator |
2026-04-09 00:40:35.392068 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-09 00:40:35.392072 | orchestrator | Thursday 09 April 2026 00:40:33 +0000 (0:00:00.149) 0:00:35.165 ********
2026-04-09 00:40:35.392076 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:40:35.392080 | orchestrator |
2026-04-09 00:40:35.392084 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-09 00:40:35.392088 | orchestrator | Thursday 09 April 2026 00:40:34 +0000 (0:00:00.331) 0:00:35.496 ********
2026-04-09 00:40:35.392092 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:40:35.392096 | orchestrator |
2026-04-09 00:40:35.392100 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-09 00:40:35.392104 | orchestrator | Thursday 09 April 2026 00:40:34 +0000 (0:00:00.130) 0:00:35.627 ********
2026-04-09 00:40:35.392108 | orchestrator | changed: [testbed-node-5] => {
2026-04-09 00:40:35.392112 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-09 00:40:35.392116 | orchestrator |         "ceph_osd_devices": {
2026-04-09 00:40:35.392120 | orchestrator |             "sdb": {
2026-04-09 00:40:35.392124 | orchestrator |                 "osd_lvm_uuid": "e1b9ff7a-7324-53df-902d-27a5c0e1e380"
2026-04-09 00:40:35.392128 | orchestrator |             },
2026-04-09 00:40:35.392132 | orchestrator |             "sdc": {
2026-04-09 00:40:35.392136 | orchestrator |                 "osd_lvm_uuid": "c85b9e91-1f7c-51a1-92b9-1f1081da5c54"
2026-04-09 00:40:35.392140 | orchestrator |             }
2026-04-09 00:40:35.392144 | orchestrator |         },
2026-04-09 00:40:35.392147 | orchestrator |         "lvm_volumes": [
2026-04-09 00:40:35.392152 | orchestrator |             {
2026-04-09 00:40:35.392156 | orchestrator |                 "data": "osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380",
2026-04-09 00:40:35.392160 | orchestrator |                 "data_vg": "ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380"
2026-04-09 00:40:35.392164 | orchestrator |             },
2026-04-09 00:40:35.392168 | orchestrator |             {
2026-04-09 00:40:35.392174 | orchestrator |                 "data": "osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54",
2026-04-09 00:40:35.392178 | orchestrator |                 "data_vg": "ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54"
2026-04-09 00:40:35.392182 | orchestrator |             }
2026-04-09 00:40:35.392186 | orchestrator |         ]
2026-04-09 00:40:35.392190 | orchestrator |     }
2026-04-09 00:40:35.392194 | orchestrator | }
2026-04-09 00:40:35.392198 | orchestrator |
2026-04-09 00:40:35.392202 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-09 00:40:35.392207 | orchestrator | Thursday 09 April 2026 00:40:34 +0000 (0:00:00.199) 0:00:35.826 ********
2026-04-09 00:40:35.392211 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-09 00:40:35.392216 | orchestrator |
2026-04-09 00:40:35.392221 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:40:35.392225 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 00:40:35.392231 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 00:40:35.392236 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-09 00:40:35.392241 | orchestrator |
2026-04-09 00:40:35.392245 | orchestrator |
2026-04-09 00:40:35.392250 | orchestrator |
2026-04-09 00:40:35.392254 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:40:35.392259 | orchestrator | Thursday 09 April 2026 00:40:35 +0000 (0:00:01.024) 0:00:36.850 ********
2026-04-09 00:40:35.392264 | orchestrator | ===============================================================================
2026-04-09 00:40:35.392288 | orchestrator | Write configuration file ------------------------------------------------ 3.94s
2026-04-09 00:40:35.392293 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-04-09 00:40:35.392297 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s
2026-04-09 00:40:35.392302 | orchestrator | Add known partitions to the list of available block devices ------------- 0.95s
2026-04-09 00:40:35.392306 | orchestrator | Get initial list of available block devices ----------------------------- 0.95s
2026-04-09 00:40:35.392311 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.89s
2026-04-09 00:40:35.392316 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.65s
2026-04-09 00:40:35.392320 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2026-04-09 00:40:35.392325 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2026-04-09 00:40:35.392329 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2026-04-09 00:40:35.392334 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.60s
2026-04-09 00:40:35.392338 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.58s
2026-04-09 00:40:35.392343 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2026-04-09 00:40:35.392351 | orchestrator | Print configuration data ------------------------------------------------ 0.57s
2026-04-09 00:40:35.771379 | orchestrator | Print DB devices -------------------------------------------------------- 0.55s
2026-04-09 00:40:35.771448 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s
2026-04-09 00:40:35.771470 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.52s
2026-04-09 00:40:35.771474 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s
2026-04-09 00:40:35.771479 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s
2026-04-09 00:40:35.771483 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.47s
2026-04-09 00:40:57.366382 | orchestrator | 2026-04-09 00:40:57 | INFO  | Task 3110dd78-e68c-4566-a5e4-3013d9471beb (sync inventory) is running in background. Output coming soon.
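The "Print configuration data" output above shows the mapping this play builds: each entry of `ceph_osd_devices` is turned into one `lvm_volumes` element named after its `osd_lvm_uuid`. As a minimal sketch of that derivation (the `osd-block-<uuid>` / `ceph-<uuid>` naming is read directly off the log; the helper name `build_lvm_volumes` is illustrative, not part of the playbook):

```python
# Illustrative reconstruction of the ceph_osd_devices -> lvm_volumes mapping
# seen in the "Print configuration data" task output for testbed-node-5.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "e1b9ff7a-7324-53df-902d-27a5c0e1e380"},
    "sdc": {"osd_lvm_uuid": "c85b9e91-1f7c-51a1-92b9-1f1081da5c54"},
}


def build_lvm_volumes(devices):
    """One lvm_volumes entry per OSD device, keyed by its osd_lvm_uuid."""
    return [
        {
            "data": "osd-block-" + conf["osd_lvm_uuid"],
            "data_vg": "ceph-" + conf["osd_lvm_uuid"],
        }
        for _, conf in sorted(devices.items())
    ]


print(build_lvm_volumes(ceph_osd_devices))
```

With the two devices above this reproduces the `lvm_volumes` list written to `ceph-lvm-configuration.yml` by the handler.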
2026-04-09 00:41:25.181877 | orchestrator | 2026-04-09 00:40:58 | INFO  | Starting group_vars file reorganization
2026-04-09 00:41:25.181991 | orchestrator | 2026-04-09 00:40:58 | INFO  | Moved 0 file(s) to their respective directories
2026-04-09 00:41:25.182008 | orchestrator | 2026-04-09 00:40:58 | INFO  | Group_vars file reorganization completed
2026-04-09 00:41:25.182084 | orchestrator | 2026-04-09 00:41:01 | INFO  | Starting variable preparation from inventory
2026-04-09 00:41:25.182097 | orchestrator | 2026-04-09 00:41:03 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-09 00:41:25.182109 | orchestrator | 2026-04-09 00:41:03 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-09 00:41:25.182139 | orchestrator | 2026-04-09 00:41:03 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-09 00:41:25.182151 | orchestrator | 2026-04-09 00:41:03 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-09 00:41:25.182163 | orchestrator | 2026-04-09 00:41:03 | INFO  | Variable preparation completed
2026-04-09 00:41:25.182175 | orchestrator | 2026-04-09 00:41:05 | INFO  | Starting inventory overwrite handling
2026-04-09 00:41:25.182186 | orchestrator | 2026-04-09 00:41:05 | INFO  | Handling group overwrites in 99-overwrite
2026-04-09 00:41:25.182197 | orchestrator | 2026-04-09 00:41:05 | INFO  | Removing group frr:children from 60-generic
2026-04-09 00:41:25.182209 | orchestrator | 2026-04-09 00:41:05 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-09 00:41:25.182242 | orchestrator | 2026-04-09 00:41:05 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-09 00:41:25.182254 | orchestrator | 2026-04-09 00:41:05 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-09 00:41:25.182266 | orchestrator | 2026-04-09 00:41:05 | INFO  | Handling group overwrites in 20-roles
2026-04-09 00:41:25.182277 | orchestrator | 2026-04-09 00:41:05 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-09 00:41:25.182288 | orchestrator | 2026-04-09 00:41:05 | INFO  | Removed 5 group(s) in total
2026-04-09 00:41:25.182299 | orchestrator | 2026-04-09 00:41:05 | INFO  | Inventory overwrite handling completed
2026-04-09 00:41:25.182402 | orchestrator | 2026-04-09 00:41:06 | INFO  | Starting merge of inventory files
2026-04-09 00:41:25.182415 | orchestrator | 2026-04-09 00:41:06 | INFO  | Inventory files merged successfully
2026-04-09 00:41:25.182428 | orchestrator | 2026-04-09 00:41:10 | INFO  | Generating minified hosts file
2026-04-09 00:41:25.182441 | orchestrator | 2026-04-09 00:41:11 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-09 00:41:25.182455 | orchestrator | 2026-04-09 00:41:11 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-09 00:41:25.182468 | orchestrator | 2026-04-09 00:41:13 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-09 00:41:25.182480 | orchestrator | 2026-04-09 00:41:23 | INFO  | Successfully wrote ClusterShell configuration
2026-04-09 00:41:25.182495 | orchestrator | [master bd123b6] 2026-04-09-00-41
2026-04-09 00:41:25.182509 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-09 00:41:25.182522 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-09 00:41:25.182534 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-09 00:41:25.182545 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-09 00:41:26.450595 | orchestrator | 2026-04-09 00:41:26 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-09 00:41:26.515696 | orchestrator | 2026-04-09 00:41:26 | INFO  | Task 937353d5-d2de-4914-bd00-3a24e932d32f (ceph-create-lvm-devices) was prepared for execution.
2026-04-09 00:41:26.515800 | orchestrator | 2026-04-09 00:41:26 | INFO  | It takes a moment until task 937353d5-d2de-4914-bd00-3a24e932d32f (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-09 00:41:37.396159 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-09 00:41:37.396263 | orchestrator | 2.16.14
2026-04-09 00:41:37.396280 | orchestrator |
2026-04-09 00:41:37.396293 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-09 00:41:37.396306 | orchestrator |
2026-04-09 00:41:37.396367 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-09 00:41:37.396379 | orchestrator | Thursday 09 April 2026 00:41:30 +0000 (0:00:00.242) 0:00:00.242 ********
2026-04-09 00:41:37.396392 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-09 00:41:37.396404 | orchestrator |
2026-04-09 00:41:37.396415 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-09 00:41:37.396427 | orchestrator | Thursday 09 April 2026 00:41:31 +0000 (0:00:00.206) 0:00:00.449 ********
2026-04-09 00:41:37.396438 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:41:37.396450 | orchestrator |
2026-04-09 00:41:37.396461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.396473 | orchestrator | Thursday 09 April 2026 00:41:31 +0000 (0:00:00.186) 0:00:00.635 ********
2026-04-09 00:41:37.396485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-09 00:41:37.396522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-09 00:41:37.396534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-09 00:41:37.396545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-09 00:41:37.396557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-09 00:41:37.396568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-09 00:41:37.396580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-09 00:41:37.396591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-09 00:41:37.396602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-09 00:41:37.396614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-09 00:41:37.396625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-09 00:41:37.396636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-09 00:41:37.396647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-09 00:41:37.396659 | orchestrator |
2026-04-09 00:41:37.396670 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.396681 | orchestrator | Thursday 09 April 2026 00:41:31 +0000 (0:00:00.352) 0:00:00.987 ********
2026-04-09 00:41:37.396693 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.396706 | orchestrator |
2026-04-09 00:41:37.396719 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.396732 | orchestrator | Thursday 09 April 2026 00:41:31 +0000 (0:00:00.370) 0:00:01.357 ********
2026-04-09 00:41:37.396746 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.396759 | orchestrator |
2026-04-09 00:41:37.396772 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.396785 | orchestrator | Thursday 09 April 2026 00:41:32 +0000 (0:00:00.167) 0:00:01.524 ********
2026-04-09 00:41:37.396798 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.396811 | orchestrator |
2026-04-09 00:41:37.396824 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.396837 | orchestrator | Thursday 09 April 2026 00:41:32 +0000 (0:00:00.165) 0:00:01.690 ********
2026-04-09 00:41:37.396850 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.396863 | orchestrator |
2026-04-09 00:41:37.396876 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.396888 | orchestrator | Thursday 09 April 2026 00:41:32 +0000 (0:00:00.158) 0:00:01.849 ********
2026-04-09 00:41:37.396901 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.396915 | orchestrator |
2026-04-09 00:41:37.396928 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.396940 | orchestrator | Thursday 09 April 2026 00:41:32 +0000 (0:00:00.166) 0:00:02.015 ********
2026-04-09 00:41:37.396953 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.396965 | orchestrator |
2026-04-09 00:41:37.396979 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.396992 | orchestrator | Thursday 09 April 2026 00:41:32 +0000 (0:00:00.174) 0:00:02.189 ********
2026-04-09 00:41:37.397005 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.397017 | orchestrator |
2026-04-09 00:41:37.397030 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.397043 | orchestrator | Thursday 09 April 2026 00:41:32 +0000 (0:00:00.164) 0:00:02.354 ********
2026-04-09 00:41:37.397056 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.397070 | orchestrator |
2026-04-09 00:41:37.397083 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.397104 | orchestrator | Thursday 09 April 2026 00:41:33 +0000 (0:00:00.169) 0:00:02.524 ********
2026-04-09 00:41:37.397132 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd)
2026-04-09 00:41:37.397145 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd)
2026-04-09 00:41:37.397156 | orchestrator |
2026-04-09 00:41:37.397167 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.397195 | orchestrator | Thursday 09 April 2026 00:41:33 +0000 (0:00:00.366) 0:00:02.890 ********
2026-04-09 00:41:37.397208 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1117e366-620b-4195-b3cd-cb9d1ba2563b)
2026-04-09 00:41:37.397219 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1117e366-620b-4195-b3cd-cb9d1ba2563b)
2026-04-09 00:41:37.397230 | orchestrator |
2026-04-09 00:41:37.397242 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.397253 | orchestrator | Thursday 09 April 2026 00:41:33 +0000 (0:00:00.376) 0:00:03.267 ********
2026-04-09 00:41:37.397264 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cc2e9d6e-928c-46c6-aaaa-26c6da7e313f)
2026-04-09 00:41:37.397276 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cc2e9d6e-928c-46c6-aaaa-26c6da7e313f)
2026-04-09 00:41:37.397287 | orchestrator |
2026-04-09 00:41:37.397299 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.397310 | orchestrator | Thursday 09 April 2026 00:41:34 +0000 (0:00:00.530) 0:00:03.797 ********
2026-04-09 00:41:37.397394 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b113bf69-5b2f-465f-b4d6-8ed3709e703c)
2026-04-09 00:41:37.397406 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b113bf69-5b2f-465f-b4d6-8ed3709e703c)
2026-04-09 00:41:37.397418 | orchestrator |
2026-04-09 00:41:37.397429 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:37.397440 | orchestrator | Thursday 09 April 2026 00:41:34 +0000 (0:00:00.524) 0:00:04.322 ********
2026-04-09 00:41:37.397451 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-09 00:41:37.397463 | orchestrator |
2026-04-09 00:41:37.397474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:37.397486 | orchestrator | Thursday 09 April 2026 00:41:35 +0000 (0:00:00.610) 0:00:04.932 ********
2026-04-09 00:41:37.397503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-09 00:41:37.397515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-09 00:41:37.397526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-09 00:41:37.397538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-09 00:41:37.397549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-09 00:41:37.397560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-09 00:41:37.397571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-09 00:41:37.397583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-09 00:41:37.397594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-09 00:41:37.397605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-09 00:41:37.397616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-09 00:41:37.397628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-09 00:41:37.397648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-09 00:41:37.397659 | orchestrator |
2026-04-09 00:41:37.397670 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:37.397681 | orchestrator | Thursday 09 April 2026 00:41:35 +0000 (0:00:00.415) 0:00:05.348 ********
2026-04-09 00:41:37.397693 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.397704 | orchestrator |
2026-04-09 00:41:37.397715 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:37.397727 | orchestrator | Thursday 09 April 2026 00:41:36 +0000 (0:00:00.198) 0:00:05.547 ********
2026-04-09 00:41:37.397738 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.397749 | orchestrator |
2026-04-09 00:41:37.397761 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:37.397772 | orchestrator | Thursday 09 April 2026 00:41:36 +0000 (0:00:00.196) 0:00:05.744 ********
2026-04-09 00:41:37.397784 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.397795 | orchestrator |
2026-04-09 00:41:37.397806 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:37.397817 | orchestrator | Thursday 09 April 2026 00:41:36 +0000 (0:00:00.187) 0:00:05.931 ********
2026-04-09 00:41:37.397829 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.397840 | orchestrator |
2026-04-09 00:41:37.397851 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:37.397863 | orchestrator | Thursday 09 April 2026 00:41:36 +0000 (0:00:00.230) 0:00:06.162 ********
2026-04-09 00:41:37.397874 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.397886 | orchestrator |
2026-04-09 00:41:37.397897 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:37.397908 | orchestrator | Thursday 09 April 2026 00:41:36 +0000 (0:00:00.206) 0:00:06.369 ********
2026-04-09 00:41:37.397920 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.397931 | orchestrator |
2026-04-09 00:41:37.397942 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:37.397954 | orchestrator | Thursday 09 April 2026 00:41:37 +0000 (0:00:00.194) 0:00:06.563 ********
2026-04-09 00:41:37.397965 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:37.397976 | orchestrator |
2026-04-09 00:41:37.397994 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:44.972178 | orchestrator | Thursday 09 April 2026 00:41:37 +0000 (0:00:00.197) 0:00:06.760 ********
2026-04-09 00:41:44.972280 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.972295 | orchestrator |
2026-04-09 00:41:44.972306 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:44.972410 | orchestrator | Thursday 09 April 2026 00:41:37 +0000 (0:00:00.182) 0:00:06.943 ********
2026-04-09 00:41:44.972424 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-09 00:41:44.972435 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-09 00:41:44.972446 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-09 00:41:44.972456 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-09 00:41:44.972467 | orchestrator |
2026-04-09 00:41:44.972478 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:44.972488 | orchestrator | Thursday 09 April 2026 00:41:38 +0000 (0:00:00.985) 0:00:07.928 ********
2026-04-09 00:41:44.972499 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.972509 | orchestrator |
2026-04-09 00:41:44.972519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:44.972529 | orchestrator | Thursday 09 April 2026 00:41:38 +0000 (0:00:00.185) 0:00:08.113 ********
2026-04-09 00:41:44.972540 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.972550 | orchestrator |
2026-04-09 00:41:44.972560 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:44.972570 | orchestrator | Thursday 09 April 2026 00:41:38 +0000 (0:00:00.220) 0:00:08.334 ********
2026-04-09 00:41:44.972605 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.972616 | orchestrator |
2026-04-09 00:41:44.972626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:41:44.972636 | orchestrator | Thursday 09 April 2026 00:41:39 +0000 (0:00:00.179) 0:00:08.513 ********
2026-04-09 00:41:44.972646 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.972656 | orchestrator |
2026-04-09 00:41:44.972674 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-09 00:41:44.972690 | orchestrator | Thursday 09 April 2026 00:41:39 +0000 (0:00:00.197) 0:00:08.711 ********
2026-04-09 00:41:44.972707 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.972723 | orchestrator |
2026-04-09 00:41:44.972740 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-09 00:41:44.972759 | orchestrator | Thursday 09 April 2026 00:41:39 +0000 (0:00:00.130) 0:00:08.841 ********
2026-04-09 00:41:44.972778 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a7170513-cc74-5c6a-bf20-0648bd8fe211'}})
2026-04-09 00:41:44.972796 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b054f04d-2068-53f2-80e7-c9a997d8c167'}})
2026-04-09 00:41:44.972812 | orchestrator |
2026-04-09 00:41:44.972830 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-09 00:41:44.972843 | orchestrator | Thursday 09 April 2026 00:41:39 +0000 (0:00:00.178) 0:00:09.020 ********
2026-04-09 00:41:44.972856 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})
2026-04-09 00:41:44.972869 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})
2026-04-09 00:41:44.972880 | orchestrator |
2026-04-09 00:41:44.972892 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-09 00:41:44.972904 | orchestrator | Thursday 09 April 2026 00:41:41 +0000 (0:00:01.907) 0:00:10.928 ********
2026-04-09 00:41:44.972916 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})
2026-04-09 00:41:44.972930 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})
2026-04-09 00:41:44.972941 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.972953 | orchestrator |
2026-04-09 00:41:44.972964 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-09 00:41:44.972976 | orchestrator | Thursday 09 April 2026 00:41:41 +0000 (0:00:00.146) 0:00:11.074 ********
2026-04-09 00:41:44.972988 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})
2026-04-09 00:41:44.973000 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})
2026-04-09 00:41:44.973012 | orchestrator |
2026-04-09 00:41:44.973023 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-09 00:41:44.973034 | orchestrator | Thursday 09 April 2026 00:41:43 +0000 (0:00:01.411) 0:00:12.486 ********
2026-04-09 00:41:44.973046 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})
2026-04-09 00:41:44.973058 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})
2026-04-09 00:41:44.973069 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.973080 | orchestrator |
2026-04-09 00:41:44.973092 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-09 00:41:44.973112 | orchestrator | Thursday 09 April 2026 00:41:43 +0000 (0:00:00.144) 0:00:12.631 ********
2026-04-09 00:41:44.973139 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.973150 | orchestrator |
2026-04-09 00:41:44.973161 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-09 00:41:44.973187 | orchestrator | Thursday 09 April 2026 00:41:43 +0000 (0:00:00.135) 0:00:12.766 ********
2026-04-09 00:41:44.973198 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})
2026-04-09 00:41:44.973208 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})
2026-04-09 00:41:44.973219 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.973229 | orchestrator |
2026-04-09 00:41:44.973239 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-09 00:41:44.973250 | orchestrator | Thursday 09 April 2026 00:41:43 +0000 (0:00:00.337) 0:00:13.103 ********
2026-04-09 00:41:44.973260 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.973270 | orchestrator |
2026-04-09 00:41:44.973281 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-09 00:41:44.973291 | orchestrator | Thursday 09 April 2026 00:41:43 +0000 (0:00:00.121) 0:00:13.225 ********
2026-04-09 00:41:44.973301 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})
2026-04-09 00:41:44.973312 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})
2026-04-09 00:41:44.973403 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.973415 | orchestrator |
2026-04-09 00:41:44.973431 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-09 00:41:44.973441 | orchestrator | Thursday 09 April 2026 00:41:43 +0000 (0:00:00.141) 0:00:13.366 ********
2026-04-09 00:41:44.973452 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.973462 | orchestrator |
2026-04-09 00:41:44.973472 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-09 00:41:44.973483 | orchestrator | Thursday 09 April 2026 00:41:44 +0000 (0:00:00.123) 0:00:13.489 ********
2026-04-09 00:41:44.973493 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})
2026-04-09 00:41:44.973503 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})
2026-04-09 00:41:44.973514 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.973524 | orchestrator |
2026-04-09 00:41:44.973534 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-09 00:41:44.973545 | orchestrator | Thursday 09 April 2026 00:41:44 +0000 (0:00:00.161) 0:00:13.651 ********
2026-04-09 00:41:44.973555 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:41:44.973565 | orchestrator |
2026-04-09 00:41:44.973576 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-09 00:41:44.973586 | orchestrator | Thursday 09 April 2026 00:41:44 +0000 (0:00:00.134) 0:00:13.786 ********
2026-04-09 00:41:44.973596 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})
2026-04-09 00:41:44.973607 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})
2026-04-09 00:41:44.973617 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.973628 | orchestrator |
2026-04-09 00:41:44.973638 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-09 00:41:44.973649 | orchestrator | Thursday 09 April 2026 00:41:44 +0000 (0:00:00.137) 0:00:13.924 ********
2026-04-09 00:41:44.973666 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})
2026-04-09 00:41:44.973677 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})
2026-04-09 00:41:44.973687 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.973697 | orchestrator |
2026-04-09 00:41:44.973708 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-09 00:41:44.973718 | orchestrator | Thursday 09 April 2026 00:41:44 +0000 (0:00:00.143) 0:00:14.067 ********
2026-04-09 00:41:44.973734 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})
2026-04-09 00:41:44.973751 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})
2026-04-09 00:41:44.973770 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.973787 | orchestrator |
2026-04-09 00:41:44.973804 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-09 00:41:44.973821 | orchestrator | Thursday 09 April 2026 00:41:44 +0000 (0:00:00.140) 0:00:14.208 ********
2026-04-09 00:41:44.973838 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:41:44.973848 | orchestrator |
2026-04-09 00:41:44.973858 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-09 00:41:44.973876 | orchestrator | Thursday 09 April 2026 00:41:44 +0000 (0:00:00.129) 0:00:14.337 ********
2026-04-09 00:41:50.536667 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.536748 | orchestrator | 2026-04-09 00:41:50.536758 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-09 00:41:50.536765 | orchestrator | Thursday 09 April 2026 00:41:45 +0000 (0:00:00.138) 0:00:14.475 ******** 2026-04-09 00:41:50.536771 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.536777 | orchestrator | 2026-04-09 00:41:50.536783 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-09 00:41:50.536789 | orchestrator | Thursday 09 April 2026 00:41:45 +0000 (0:00:00.129) 0:00:14.605 ******** 2026-04-09 00:41:50.536795 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 00:41:50.536802 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-09 00:41:50.536808 | orchestrator | } 2026-04-09 00:41:50.536815 | orchestrator | 2026-04-09 00:41:50.536821 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-09 00:41:50.536826 | orchestrator | Thursday 09 April 2026 00:41:45 +0000 (0:00:00.331) 0:00:14.937 ******** 2026-04-09 00:41:50.536832 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 00:41:50.536838 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-09 00:41:50.536843 | orchestrator | } 2026-04-09 00:41:50.536849 | orchestrator | 2026-04-09 00:41:50.536855 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-09 00:41:50.536860 | orchestrator | Thursday 09 April 2026 00:41:45 +0000 (0:00:00.134) 0:00:15.072 ******** 2026-04-09 00:41:50.536866 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 00:41:50.536872 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-09 00:41:50.536878 | orchestrator | } 2026-04-09 00:41:50.536883 | orchestrator | 2026-04-09 00:41:50.536889 | orchestrator | TASK [Gather DB VGs with total and 
available size in bytes] ******************** 2026-04-09 00:41:50.536895 | orchestrator | Thursday 09 April 2026 00:41:45 +0000 (0:00:00.129) 0:00:15.202 ******** 2026-04-09 00:41:50.536900 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:50.536909 | orchestrator | 2026-04-09 00:41:50.536935 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-09 00:41:50.536945 | orchestrator | Thursday 09 April 2026 00:41:46 +0000 (0:00:00.593) 0:00:15.796 ******** 2026-04-09 00:41:50.536955 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:50.536986 | orchestrator | 2026-04-09 00:41:50.536997 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-09 00:41:50.537004 | orchestrator | Thursday 09 April 2026 00:41:46 +0000 (0:00:00.457) 0:00:16.253 ******** 2026-04-09 00:41:50.537010 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:50.537015 | orchestrator | 2026-04-09 00:41:50.537021 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-09 00:41:50.537026 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 (0:00:00.485) 0:00:16.739 ******** 2026-04-09 00:41:50.537032 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:50.537038 | orchestrator | 2026-04-09 00:41:50.537043 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-09 00:41:50.537049 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 (0:00:00.123) 0:00:16.862 ******** 2026-04-09 00:41:50.537054 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537060 | orchestrator | 2026-04-09 00:41:50.537066 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-09 00:41:50.537071 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 (0:00:00.103) 0:00:16.966 ******** 2026-04-09 00:41:50.537077 | orchestrator | skipping: [testbed-node-3] 
2026-04-09 00:41:50.537083 | orchestrator | 2026-04-09 00:41:50.537088 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-09 00:41:50.537094 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 (0:00:00.090) 0:00:17.057 ******** 2026-04-09 00:41:50.537099 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 00:41:50.537105 | orchestrator |  "vgs_report": { 2026-04-09 00:41:50.537111 | orchestrator |  "vg": [] 2026-04-09 00:41:50.537117 | orchestrator |  } 2026-04-09 00:41:50.537123 | orchestrator | } 2026-04-09 00:41:50.537128 | orchestrator | 2026-04-09 00:41:50.537134 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-09 00:41:50.537140 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 (0:00:00.124) 0:00:17.181 ******** 2026-04-09 00:41:50.537145 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537151 | orchestrator | 2026-04-09 00:41:50.537156 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-09 00:41:50.537162 | orchestrator | Thursday 09 April 2026 00:41:47 +0000 (0:00:00.124) 0:00:17.306 ******** 2026-04-09 00:41:50.537168 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537174 | orchestrator | 2026-04-09 00:41:50.537180 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-09 00:41:50.537185 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 (0:00:00.121) 0:00:17.427 ******** 2026-04-09 00:41:50.537191 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537196 | orchestrator | 2026-04-09 00:41:50.537202 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-09 00:41:50.537208 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 (0:00:00.128) 0:00:17.556 ******** 2026-04-09 00:41:50.537214 | orchestrator | skipping: [testbed-node-3] 
2026-04-09 00:41:50.537219 | orchestrator | 2026-04-09 00:41:50.537225 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-09 00:41:50.537230 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 (0:00:00.251) 0:00:17.807 ******** 2026-04-09 00:41:50.537236 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537242 | orchestrator | 2026-04-09 00:41:50.537247 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-09 00:41:50.537253 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 (0:00:00.121) 0:00:17.929 ******** 2026-04-09 00:41:50.537258 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537264 | orchestrator | 2026-04-09 00:41:50.537270 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-09 00:41:50.537275 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 (0:00:00.122) 0:00:18.051 ******** 2026-04-09 00:41:50.537281 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537287 | orchestrator | 2026-04-09 00:41:50.537292 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-09 00:41:50.537303 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 (0:00:00.130) 0:00:18.182 ******** 2026-04-09 00:41:50.537335 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537342 | orchestrator | 2026-04-09 00:41:50.537347 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-09 00:41:50.537353 | orchestrator | Thursday 09 April 2026 00:41:48 +0000 (0:00:00.114) 0:00:18.297 ******** 2026-04-09 00:41:50.537359 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537364 | orchestrator | 2026-04-09 00:41:50.537370 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-09 00:41:50.537375 | orchestrator | 
Thursday 09 April 2026 00:41:49 +0000 (0:00:00.123) 0:00:18.421 ******** 2026-04-09 00:41:50.537381 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537387 | orchestrator | 2026-04-09 00:41:50.537393 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-09 00:41:50.537398 | orchestrator | Thursday 09 April 2026 00:41:49 +0000 (0:00:00.120) 0:00:18.541 ******** 2026-04-09 00:41:50.537404 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537410 | orchestrator | 2026-04-09 00:41:50.537415 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-09 00:41:50.537421 | orchestrator | Thursday 09 April 2026 00:41:49 +0000 (0:00:00.112) 0:00:18.654 ******** 2026-04-09 00:41:50.537426 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537432 | orchestrator | 2026-04-09 00:41:50.537438 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-09 00:41:50.537443 | orchestrator | Thursday 09 April 2026 00:41:49 +0000 (0:00:00.127) 0:00:18.782 ******** 2026-04-09 00:41:50.537449 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537455 | orchestrator | 2026-04-09 00:41:50.537460 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-09 00:41:50.537466 | orchestrator | Thursday 09 April 2026 00:41:49 +0000 (0:00:00.129) 0:00:18.912 ******** 2026-04-09 00:41:50.537472 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537477 | orchestrator | 2026-04-09 00:41:50.537483 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-09 00:41:50.537489 | orchestrator | Thursday 09 April 2026 00:41:49 +0000 (0:00:00.117) 0:00:19.030 ******** 2026-04-09 00:41:50.537496 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 
'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})  2026-04-09 00:41:50.537510 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})  2026-04-09 00:41:50.537515 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537521 | orchestrator | 2026-04-09 00:41:50.537527 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-09 00:41:50.537532 | orchestrator | Thursday 09 April 2026 00:41:49 +0000 (0:00:00.138) 0:00:19.169 ******** 2026-04-09 00:41:50.537538 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})  2026-04-09 00:41:50.537544 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})  2026-04-09 00:41:50.537549 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537555 | orchestrator | 2026-04-09 00:41:50.537561 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-09 00:41:50.537566 | orchestrator | Thursday 09 April 2026 00:41:50 +0000 (0:00:00.243) 0:00:19.412 ******** 2026-04-09 00:41:50.537572 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})  2026-04-09 00:41:50.537578 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})  2026-04-09 00:41:50.537588 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537593 | orchestrator | 2026-04-09 00:41:50.537599 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-04-09 00:41:50.537605 | orchestrator | Thursday 09 April 2026 00:41:50 +0000 (0:00:00.128) 0:00:19.541 ******** 2026-04-09 00:41:50.537610 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})  2026-04-09 00:41:50.537616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})  2026-04-09 00:41:50.537622 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537627 | orchestrator | 2026-04-09 00:41:50.537633 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-09 00:41:50.537639 | orchestrator | Thursday 09 April 2026 00:41:50 +0000 (0:00:00.147) 0:00:19.688 ******** 2026-04-09 00:41:50.537644 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})  2026-04-09 00:41:50.537650 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})  2026-04-09 00:41:50.537656 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:50.537661 | orchestrator | 2026-04-09 00:41:50.537667 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-09 00:41:50.537673 | orchestrator | Thursday 09 April 2026 00:41:50 +0000 (0:00:00.160) 0:00:19.849 ******** 2026-04-09 00:41:50.537682 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})  2026-04-09 00:41:55.212850 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 
'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})  2026-04-09 00:41:55.213903 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:55.213982 | orchestrator | 2026-04-09 00:41:55.214000 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-09 00:41:55.214014 | orchestrator | Thursday 09 April 2026 00:41:50 +0000 (0:00:00.137) 0:00:19.987 ******** 2026-04-09 00:41:55.214087 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})  2026-04-09 00:41:55.214100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})  2026-04-09 00:41:55.214112 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:55.214124 | orchestrator | 2026-04-09 00:41:55.214136 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-09 00:41:55.214147 | orchestrator | Thursday 09 April 2026 00:41:50 +0000 (0:00:00.138) 0:00:20.126 ******** 2026-04-09 00:41:55.214159 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})  2026-04-09 00:41:55.214188 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})  2026-04-09 00:41:55.214214 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:55.214236 | orchestrator | 2026-04-09 00:41:55.214248 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-09 00:41:55.214260 | orchestrator | Thursday 09 April 2026 00:41:50 +0000 (0:00:00.125) 0:00:20.251 ******** 2026-04-09 00:41:55.214271 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:55.214283 | 
orchestrator | 2026-04-09 00:41:55.214295 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-09 00:41:55.214356 | orchestrator | Thursday 09 April 2026 00:41:51 +0000 (0:00:00.449) 0:00:20.701 ******** 2026-04-09 00:41:55.214370 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:55.214381 | orchestrator | 2026-04-09 00:41:55.214393 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-09 00:41:55.214404 | orchestrator | Thursday 09 April 2026 00:41:51 +0000 (0:00:00.439) 0:00:21.141 ******** 2026-04-09 00:41:55.214416 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:41:55.214427 | orchestrator | 2026-04-09 00:41:55.214439 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-09 00:41:55.214451 | orchestrator | Thursday 09 April 2026 00:41:51 +0000 (0:00:00.132) 0:00:21.273 ******** 2026-04-09 00:41:55.214463 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'vg_name': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'}) 2026-04-09 00:41:55.214476 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'vg_name': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'}) 2026-04-09 00:41:55.214487 | orchestrator | 2026-04-09 00:41:55.214499 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-09 00:41:55.214512 | orchestrator | Thursday 09 April 2026 00:41:52 +0000 (0:00:00.144) 0:00:21.418 ******** 2026-04-09 00:41:55.214523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})  2026-04-09 00:41:55.214535 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 
'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})  2026-04-09 00:41:55.214546 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:55.214558 | orchestrator | 2026-04-09 00:41:55.214570 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-09 00:41:55.214582 | orchestrator | Thursday 09 April 2026 00:41:52 +0000 (0:00:00.133) 0:00:21.551 ******** 2026-04-09 00:41:55.214593 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})  2026-04-09 00:41:55.214605 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})  2026-04-09 00:41:55.214617 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:55.214629 | orchestrator | 2026-04-09 00:41:55.214641 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-09 00:41:55.214652 | orchestrator | Thursday 09 April 2026 00:41:52 +0000 (0:00:00.272) 0:00:21.824 ******** 2026-04-09 00:41:55.214664 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'})  2026-04-09 00:41:55.214675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'})  2026-04-09 00:41:55.214688 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:41:55.214699 | orchestrator | 2026-04-09 00:41:55.214710 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-09 00:41:55.214722 | orchestrator | Thursday 09 April 2026 00:41:52 +0000 (0:00:00.143) 0:00:21.968 ******** 2026-04-09 00:41:55.214762 | orchestrator | ok: [testbed-node-3] => { 2026-04-09 
00:41:55.214833 | orchestrator |  "lvm_report": { 2026-04-09 00:41:55.214848 | orchestrator |  "lv": [ 2026-04-09 00:41:55.214859 | orchestrator |  { 2026-04-09 00:41:55.214871 | orchestrator |  "lv_name": "osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211", 2026-04-09 00:41:55.214883 | orchestrator |  "vg_name": "ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211" 2026-04-09 00:41:55.214895 | orchestrator |  }, 2026-04-09 00:41:55.214906 | orchestrator |  { 2026-04-09 00:41:55.214917 | orchestrator |  "lv_name": "osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167", 2026-04-09 00:41:55.214938 | orchestrator |  "vg_name": "ceph-b054f04d-2068-53f2-80e7-c9a997d8c167" 2026-04-09 00:41:55.214950 | orchestrator |  } 2026-04-09 00:41:55.214961 | orchestrator |  ], 2026-04-09 00:41:55.214973 | orchestrator |  "pv": [ 2026-04-09 00:41:55.214984 | orchestrator |  { 2026-04-09 00:41:55.214996 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-09 00:41:55.215008 | orchestrator |  "vg_name": "ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211" 2026-04-09 00:41:55.215055 | orchestrator |  }, 2026-04-09 00:41:55.215069 | orchestrator |  { 2026-04-09 00:41:55.215080 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-09 00:41:55.215092 | orchestrator |  "vg_name": "ceph-b054f04d-2068-53f2-80e7-c9a997d8c167" 2026-04-09 00:41:55.215103 | orchestrator |  } 2026-04-09 00:41:55.215114 | orchestrator |  ] 2026-04-09 00:41:55.215126 | orchestrator |  } 2026-04-09 00:41:55.215137 | orchestrator | } 2026-04-09 00:41:55.215149 | orchestrator | 2026-04-09 00:41:55.215161 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-09 00:41:55.215173 | orchestrator | 2026-04-09 00:41:55.215184 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 00:41:55.215197 | orchestrator | Thursday 09 April 2026 00:41:52 +0000 (0:00:00.258) 0:00:22.226 ******** 2026-04-09 00:41:55.215209 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-09 00:41:55.215220 | orchestrator | 2026-04-09 00:41:55.215232 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 00:41:55.215244 | orchestrator | Thursday 09 April 2026 00:41:53 +0000 (0:00:00.241) 0:00:22.468 ******** 2026-04-09 00:41:55.215256 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:41:55.215267 | orchestrator | 2026-04-09 00:41:55.215279 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:55.215290 | orchestrator | Thursday 09 April 2026 00:41:53 +0000 (0:00:00.198) 0:00:22.666 ******** 2026-04-09 00:41:55.215302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-09 00:41:55.215313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-09 00:41:55.215355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-09 00:41:55.215368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-09 00:41:55.215379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-09 00:41:55.215391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-09 00:41:55.215402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-09 00:41:55.215413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-09 00:41:55.215425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-09 00:41:55.215436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-09 00:41:55.215448 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-09 00:41:55.215459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-09 00:41:55.215471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-09 00:41:55.215483 | orchestrator | 2026-04-09 00:41:55.215494 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:55.215506 | orchestrator | Thursday 09 April 2026 00:41:53 +0000 (0:00:00.360) 0:00:23.027 ******** 2026-04-09 00:41:55.215517 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:55.215529 | orchestrator | 2026-04-09 00:41:55.215540 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:55.215560 | orchestrator | Thursday 09 April 2026 00:41:53 +0000 (0:00:00.177) 0:00:23.204 ******** 2026-04-09 00:41:55.215572 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:55.215583 | orchestrator | 2026-04-09 00:41:55.215594 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:55.215606 | orchestrator | Thursday 09 April 2026 00:41:54 +0000 (0:00:00.173) 0:00:23.379 ******** 2026-04-09 00:41:55.215617 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:55.215629 | orchestrator | 2026-04-09 00:41:55.215641 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:55.215652 | orchestrator | Thursday 09 April 2026 00:41:54 +0000 (0:00:00.176) 0:00:23.555 ******** 2026-04-09 00:41:55.215664 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:41:55.215676 | orchestrator | 2026-04-09 00:41:55.215687 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:41:55.215699 | orchestrator | Thursday 09 April 2026 00:41:54 +0000 
(0:00:00.586) 0:00:24.142 ********
2026-04-09 00:41:55.215711 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:41:55.215722 | orchestrator |
2026-04-09 00:41:55.215734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:41:55.215746 | orchestrator | Thursday 09 April 2026 00:41:54 +0000 (0:00:00.214) 0:00:24.357 ********
2026-04-09 00:41:55.215757 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:41:55.215768 | orchestrator |
2026-04-09 00:41:55.215788 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:04.662261 | orchestrator | Thursday 09 April 2026 00:41:55 +0000 (0:00:00.220) 0:00:24.577 ********
2026-04-09 00:42:04.662374 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662384 | orchestrator |
2026-04-09 00:42:04.662390 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:04.662411 | orchestrator | Thursday 09 April 2026 00:41:55 +0000 (0:00:00.189) 0:00:24.767 ********
2026-04-09 00:42:04.662416 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662420 | orchestrator |
2026-04-09 00:42:04.662425 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:04.662430 | orchestrator | Thursday 09 April 2026 00:41:55 +0000 (0:00:00.188) 0:00:24.955 ********
2026-04-09 00:42:04.662434 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc)
2026-04-09 00:42:04.662440 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc)
2026-04-09 00:42:04.662445 | orchestrator |
2026-04-09 00:42:04.662449 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:04.662454 | orchestrator | Thursday 09 April 2026 00:41:55 +0000 (0:00:00.406) 0:00:25.361 ********
2026-04-09 00:42:04.662458 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a2730516-0b41-4086-99de-bfe7a2602e3b)
2026-04-09 00:42:04.662462 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a2730516-0b41-4086-99de-bfe7a2602e3b)
2026-04-09 00:42:04.662466 | orchestrator |
2026-04-09 00:42:04.662471 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:04.662477 | orchestrator | Thursday 09 April 2026 00:41:56 +0000 (0:00:00.423) 0:00:25.785 ********
2026-04-09 00:42:04.662482 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7d3f3539-bcc0-40e2-bb47-88465426d961)
2026-04-09 00:42:04.662486 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7d3f3539-bcc0-40e2-bb47-88465426d961)
2026-04-09 00:42:04.662490 | orchestrator |
2026-04-09 00:42:04.662494 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:04.662499 | orchestrator | Thursday 09 April 2026 00:41:56 +0000 (0:00:00.421) 0:00:26.206 ********
2026-04-09 00:42:04.662503 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_78a0dd59-f7ff-4f21-9079-dceaea0538fa)
2026-04-09 00:42:04.662521 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_78a0dd59-f7ff-4f21-9079-dceaea0538fa)
2026-04-09 00:42:04.662526 | orchestrator |
2026-04-09 00:42:04.662530 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-09 00:42:04.662534 | orchestrator | Thursday 09 April 2026 00:41:57 +0000 (0:00:00.406) 0:00:26.613 ********
2026-04-09 00:42:04.662538 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-09 00:42:04.662542 | orchestrator |
2026-04-09 00:42:04.662547 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662551 | orchestrator | Thursday 09 April 2026 00:41:57 +0000 (0:00:00.312) 0:00:26.925 ********
2026-04-09 00:42:04.662555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-09 00:42:04.662560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-09 00:42:04.662564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-09 00:42:04.662568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-09 00:42:04.662572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-09 00:42:04.662576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-09 00:42:04.662580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-09 00:42:04.662584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-09 00:42:04.662589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-09 00:42:04.662593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-09 00:42:04.662597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-09 00:42:04.662601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-09 00:42:04.662605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-09 00:42:04.662609 | orchestrator |
2026-04-09 00:42:04.662614 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662618 | orchestrator | Thursday 09 April 2026 00:41:58 +0000 (0:00:00.574) 0:00:27.500 ********
2026-04-09 00:42:04.662622 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662626 | orchestrator |
2026-04-09 00:42:04.662630 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662634 | orchestrator | Thursday 09 April 2026 00:41:58 +0000 (0:00:00.185) 0:00:27.685 ********
2026-04-09 00:42:04.662639 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662643 | orchestrator |
2026-04-09 00:42:04.662647 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662651 | orchestrator | Thursday 09 April 2026 00:41:58 +0000 (0:00:00.189) 0:00:27.874 ********
2026-04-09 00:42:04.662655 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662659 | orchestrator |
2026-04-09 00:42:04.662674 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662679 | orchestrator | Thursday 09 April 2026 00:41:58 +0000 (0:00:00.172) 0:00:28.047 ********
2026-04-09 00:42:04.662683 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662687 | orchestrator |
2026-04-09 00:42:04.662691 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662695 | orchestrator | Thursday 09 April 2026 00:41:58 +0000 (0:00:00.174) 0:00:28.221 ********
2026-04-09 00:42:04.662700 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662704 | orchestrator |
2026-04-09 00:42:04.662708 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662716 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.172) 0:00:28.394 ********
2026-04-09 00:42:04.662720 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662725 | orchestrator |
2026-04-09 00:42:04.662729 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662733 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.176) 0:00:28.571 ********
2026-04-09 00:42:04.662737 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662741 | orchestrator |
2026-04-09 00:42:04.662746 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662750 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.177) 0:00:28.748 ********
2026-04-09 00:42:04.662754 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662758 | orchestrator |
2026-04-09 00:42:04.662762 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662766 | orchestrator | Thursday 09 April 2026 00:41:59 +0000 (0:00:00.178) 0:00:28.927 ********
2026-04-09 00:42:04.662773 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-09 00:42:04.662778 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-09 00:42:04.662783 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-09 00:42:04.662787 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-09 00:42:04.662791 | orchestrator |
2026-04-09 00:42:04.662795 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662799 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.693) 0:00:29.620 ********
2026-04-09 00:42:04.662803 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662808 | orchestrator |
2026-04-09 00:42:04.662812 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662816 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.156) 0:00:29.777 ********
2026-04-09 00:42:04.662820 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662824 | orchestrator |
2026-04-09 00:42:04.662828 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662833 | orchestrator | Thursday 09 April 2026 00:42:00 +0000 (0:00:00.175) 0:00:29.953 ********
2026-04-09 00:42:04.662837 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662841 | orchestrator |
2026-04-09 00:42:04.662845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-09 00:42:04.662849 | orchestrator | Thursday 09 April 2026 00:42:01 +0000 (0:00:00.462) 0:00:30.416 ********
2026-04-09 00:42:04.662853 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662858 | orchestrator |
2026-04-09 00:42:04.662862 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-09 00:42:04.662866 | orchestrator | Thursday 09 April 2026 00:42:01 +0000 (0:00:00.180) 0:00:30.596 ********
2026-04-09 00:42:04.662870 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662874 | orchestrator |
2026-04-09 00:42:04.662878 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-09 00:42:04.662882 | orchestrator | Thursday 09 April 2026 00:42:01 +0000 (0:00:00.118) 0:00:30.715 ********
2026-04-09 00:42:04.662887 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bd7ebef9-c50f-5d78-8aca-8eab443ce24e'}})
2026-04-09 00:42:04.662891 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c145dd89-b6cf-5d58-ae96-f0c6197297d1'}})
2026-04-09 00:42:04.662896 | orchestrator |
2026-04-09 00:42:04.662900 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-09 00:42:04.662904 | orchestrator | Thursday 09 April 2026 00:42:01 +0000 (0:00:00.157) 0:00:30.873 ********
2026-04-09 00:42:04.662909 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:04.662915 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:04.662927 | orchestrator |
2026-04-09 00:42:04.662932 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-09 00:42:04.662936 | orchestrator | Thursday 09 April 2026 00:42:03 +0000 (0:00:01.825) 0:00:32.698 ********
2026-04-09 00:42:04.662940 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:04.662945 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:04.662950 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:04.662954 | orchestrator |
2026-04-09 00:42:04.662958 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-09 00:42:04.662962 | orchestrator | Thursday 09 April 2026 00:42:03 +0000 (0:00:00.120) 0:00:32.819 ********
2026-04-09 00:42:04.662967 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:04.662974 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:09.861769 | orchestrator |
2026-04-09 00:42:09.861870 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-09 00:42:09.861901 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 (0:00:01.286) 0:00:34.105 ********
2026-04-09 00:42:09.861915 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:09.861928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:09.861940 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.861953 | orchestrator |
2026-04-09 00:42:09.861964 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-09 00:42:09.861976 | orchestrator | Thursday 09 April 2026 00:42:04 +0000 (0:00:00.153) 0:00:34.259 ********
2026-04-09 00:42:09.861988 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.861999 | orchestrator |
2026-04-09 00:42:09.862011 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-09 00:42:09.862077 | orchestrator | Thursday 09 April 2026 00:42:05 +0000 (0:00:00.168) 0:00:34.428 ********
2026-04-09 00:42:09.862089 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:09.862101 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:09.862113 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.862124 | orchestrator |
2026-04-09 00:42:09.862136 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-09 00:42:09.862148 | orchestrator | Thursday 09 April 2026 00:42:05 +0000 (0:00:00.139) 0:00:34.568 ********
2026-04-09 00:42:09.862159 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.862170 | orchestrator |
2026-04-09 00:42:09.862182 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-09 00:42:09.862194 | orchestrator | Thursday 09 April 2026 00:42:05 +0000 (0:00:00.131) 0:00:34.699 ********
2026-04-09 00:42:09.862206 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:09.862217 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:09.862229 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.862264 | orchestrator |
2026-04-09 00:42:09.862276 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-09 00:42:09.862288 | orchestrator | Thursday 09 April 2026 00:42:05 +0000 (0:00:00.146) 0:00:34.846 ********
2026-04-09 00:42:09.862299 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.862311 | orchestrator |
2026-04-09 00:42:09.862324 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-09 00:42:09.862356 | orchestrator | Thursday 09 April 2026 00:42:05 +0000 (0:00:00.384) 0:00:35.230 ********
2026-04-09 00:42:09.862369 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:09.862381 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:09.862393 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.862404 | orchestrator |
2026-04-09 00:42:09.862416 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-09 00:42:09.862427 | orchestrator | Thursday 09 April 2026 00:42:06 +0000 (0:00:00.176) 0:00:35.407 ********
2026-04-09 00:42:09.862439 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:42:09.862452 | orchestrator |
2026-04-09 00:42:09.862463 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-09 00:42:09.862475 | orchestrator | Thursday 09 April 2026 00:42:06 +0000 (0:00:00.144) 0:00:35.552 ********
2026-04-09 00:42:09.862486 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:09.862498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:09.862510 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.862521 | orchestrator |
2026-04-09 00:42:09.862533 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-09 00:42:09.862545 | orchestrator | Thursday 09 April 2026 00:42:06 +0000 (0:00:00.147) 0:00:35.699 ********
2026-04-09 00:42:09.862556 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:09.862568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:09.862579 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.862591 | orchestrator |
2026-04-09 00:42:09.862603 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-09 00:42:09.862633 | orchestrator | Thursday 09 April 2026 00:42:06 +0000 (0:00:00.130) 0:00:35.830 ********
2026-04-09 00:42:09.862662 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:09.862675 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:09.862686 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.862698 | orchestrator |
2026-04-09 00:42:09.862709 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-09 00:42:09.862721 | orchestrator | Thursday 09 April 2026 00:42:06 +0000 (0:00:00.143) 0:00:35.973 ********
2026-04-09 00:42:09.862732 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.862743 | orchestrator |
2026-04-09 00:42:09.862755 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-09 00:42:09.862767 | orchestrator | Thursday 09 April 2026 00:42:06 +0000 (0:00:00.119) 0:00:36.093 ********
2026-04-09 00:42:09.862778 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.862797 | orchestrator |
2026-04-09 00:42:09.862808 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-09 00:42:09.862820 | orchestrator | Thursday 09 April 2026 00:42:06 +0000 (0:00:00.103) 0:00:36.197 ********
2026-04-09 00:42:09.862831 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.862843 | orchestrator |
2026-04-09 00:42:09.862859 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-09 00:42:09.862871 | orchestrator | Thursday 09 April 2026 00:42:06 +0000 (0:00:00.111) 0:00:36.308 ********
2026-04-09 00:42:09.862882 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 00:42:09.862894 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-09 00:42:09.862906 | orchestrator | }
2026-04-09 00:42:09.862918 | orchestrator |
2026-04-09 00:42:09.862929 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-09 00:42:09.862940 | orchestrator | Thursday 09 April 2026 00:42:07 +0000 (0:00:00.125) 0:00:36.434 ********
2026-04-09 00:42:09.862952 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 00:42:09.862963 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-09 00:42:09.862975 | orchestrator | }
2026-04-09 00:42:09.862986 | orchestrator |
2026-04-09 00:42:09.862998 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-09 00:42:09.863010 | orchestrator | Thursday 09 April 2026 00:42:07 +0000 (0:00:00.128) 0:00:36.563 ********
2026-04-09 00:42:09.863021 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 00:42:09.863033 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-09 00:42:09.863044 | orchestrator | }
2026-04-09 00:42:09.863056 | orchestrator |
2026-04-09 00:42:09.863067 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-09 00:42:09.863079 | orchestrator | Thursday 09 April 2026 00:42:07 +0000 (0:00:00.142) 0:00:36.705 ********
2026-04-09 00:42:09.863090 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:42:09.863101 | orchestrator |
2026-04-09 00:42:09.863113 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-09 00:42:09.863124 | orchestrator | Thursday 09 April 2026 00:42:07 +0000 (0:00:00.615) 0:00:37.320 ********
2026-04-09 00:42:09.863135 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:42:09.863147 | orchestrator |
2026-04-09 00:42:09.863158 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-09 00:42:09.863170 | orchestrator | Thursday 09 April 2026 00:42:08 +0000 (0:00:00.490) 0:00:37.811 ********
2026-04-09 00:42:09.863181 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:42:09.863193 | orchestrator |
2026-04-09 00:42:09.863204 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-09 00:42:09.863215 | orchestrator | Thursday 09 April 2026 00:42:08 +0000 (0:00:00.484) 0:00:38.296 ********
2026-04-09 00:42:09.863227 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:42:09.863238 | orchestrator |
2026-04-09 00:42:09.863250 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-09 00:42:09.863261 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.136) 0:00:38.433 ********
2026-04-09 00:42:09.863272 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.863284 | orchestrator |
2026-04-09 00:42:09.863295 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-09 00:42:09.863307 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.089) 0:00:38.523 ********
2026-04-09 00:42:09.863318 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.863330 | orchestrator |
2026-04-09 00:42:09.863364 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-09 00:42:09.863376 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.090) 0:00:38.613 ********
2026-04-09 00:42:09.863387 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 00:42:09.863399 | orchestrator |  "vgs_report": {
2026-04-09 00:42:09.863411 | orchestrator |  "vg": []
2026-04-09 00:42:09.863423 | orchestrator |  }
2026-04-09 00:42:09.863434 | orchestrator | }
2026-04-09 00:42:09.863446 | orchestrator |
2026-04-09 00:42:09.863458 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-09 00:42:09.863476 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.136) 0:00:38.750 ********
2026-04-09 00:42:09.863487 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.863499 | orchestrator |
2026-04-09 00:42:09.863510 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-09 00:42:09.863522 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.120) 0:00:38.870 ********
2026-04-09 00:42:09.863533 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.863545 | orchestrator |
2026-04-09 00:42:09.863556 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-09 00:42:09.863568 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.110) 0:00:38.981 ********
2026-04-09 00:42:09.863579 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.863590 | orchestrator |
2026-04-09 00:42:09.863601 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-09 00:42:09.863613 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.125) 0:00:39.107 ********
2026-04-09 00:42:09.863625 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:09.863636 | orchestrator |
2026-04-09 00:42:09.863654 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-09 00:42:13.882286 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.121) 0:00:39.229 ********
2026-04-09 00:42:13.882466 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.882487 | orchestrator |
2026-04-09 00:42:13.882501 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-09 00:42:13.882513 | orchestrator | Thursday 09 April 2026 00:42:09 +0000 (0:00:00.101) 0:00:39.330 ********
2026-04-09 00:42:13.882525 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.882537 | orchestrator |
2026-04-09 00:42:13.882548 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-09 00:42:13.882560 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.237) 0:00:39.568 ********
2026-04-09 00:42:13.882572 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.882583 | orchestrator |
2026-04-09 00:42:13.882595 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-09 00:42:13.882606 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.125) 0:00:39.693 ********
2026-04-09 00:42:13.882618 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.882629 | orchestrator |
2026-04-09 00:42:13.882640 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-09 00:42:13.882652 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.122) 0:00:39.816 ********
2026-04-09 00:42:13.882663 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.882675 | orchestrator |
2026-04-09 00:42:13.882704 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-09 00:42:13.882716 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.126) 0:00:39.943 ********
2026-04-09 00:42:13.882728 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.882739 | orchestrator |
2026-04-09 00:42:13.882751 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-09 00:42:13.882762 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.123) 0:00:40.066 ********
2026-04-09 00:42:13.882773 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.882785 | orchestrator |
2026-04-09 00:42:13.882796 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-09 00:42:13.882808 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.106) 0:00:40.173 ********
2026-04-09 00:42:13.882822 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.882835 | orchestrator |
2026-04-09 00:42:13.882848 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-09 00:42:13.882861 | orchestrator | Thursday 09 April 2026 00:42:10 +0000 (0:00:00.120) 0:00:40.293 ********
2026-04-09 00:42:13.882873 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.882886 | orchestrator |
2026-04-09 00:42:13.882899 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-09 00:42:13.882933 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.114) 0:00:40.408 ********
2026-04-09 00:42:13.882946 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.882960 | orchestrator |
2026-04-09 00:42:13.882973 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-09 00:42:13.882986 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.120) 0:00:40.528 ********
2026-04-09 00:42:13.883001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:13.883014 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:13.883026 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.883038 | orchestrator |
2026-04-09 00:42:13.883050 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-09 00:42:13.883061 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.131) 0:00:40.660 ********
2026-04-09 00:42:13.883073 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:13.883085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:13.883096 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.883108 | orchestrator |
2026-04-09 00:42:13.883120 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-09 00:42:13.883131 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.132) 0:00:40.792 ********
2026-04-09 00:42:13.883143 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:13.883154 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:13.883166 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.883177 | orchestrator |
2026-04-09 00:42:13.883189 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-09 00:42:13.883200 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.129) 0:00:40.922 ********
2026-04-09 00:42:13.883212 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:13.883224 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:13.883236 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.883248 | orchestrator |
2026-04-09 00:42:13.883279 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-09 00:42:13.883292 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.258) 0:00:41.180 ********
2026-04-09 00:42:13.883304 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:13.883316 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:13.883328 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.883368 | orchestrator |
2026-04-09 00:42:13.883388 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-09 00:42:13.883407 | orchestrator | Thursday 09 April 2026 00:42:11 +0000 (0:00:00.129) 0:00:41.310 ********
2026-04-09 00:42:13.883427 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:13.883462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:13.883474 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.883486 | orchestrator |
2026-04-09 00:42:13.883497 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-09 00:42:13.883509 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.137) 0:00:41.447 ********
2026-04-09 00:42:13.883520 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:13.883532 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:13.883543 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.883555 | orchestrator |
2026-04-09 00:42:13.883566 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-09 00:42:13.883577 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.119) 0:00:41.566 ********
2026-04-09 00:42:13.883589 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:13.883601 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:13.883612 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.883623 | orchestrator |
2026-04-09 00:42:13.883635 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-09 00:42:13.883646 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.134) 0:00:41.700 ********
2026-04-09 00:42:13.883657 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:42:13.883669 | orchestrator |
2026-04-09 00:42:13.883680 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-09 00:42:13.883692 | orchestrator | Thursday 09 April 2026 00:42:12 +0000 (0:00:00.484) 0:00:42.185 ********
2026-04-09 00:42:13.883703 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:42:13.883715 | orchestrator |
2026-04-09 00:42:13.883726 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-09 00:42:13.883737 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.532) 0:00:42.717 ********
2026-04-09 00:42:13.883749 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:42:13.883760 | orchestrator |
2026-04-09 00:42:13.883772 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-09 00:42:13.883783 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.139) 0:00:42.857 ********
2026-04-09 00:42:13.883794 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'vg_name': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:13.883807 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'vg_name': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:13.883819 | orchestrator |
2026-04-09 00:42:13.883830 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-09 00:42:13.883841 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.177) 0:00:43.035 ********
2026-04-09 00:42:13.883853 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:13.883864 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:13.883876 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:13.883887 | orchestrator |
2026-04-09 00:42:13.883905 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-09 00:42:13.883917 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.141) 0:00:43.176 ********
2026-04-09 00:42:13.883928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:13.883947 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:19.203191 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:19.203259 | orchestrator |
2026-04-09 00:42:19.203267 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-09 00:42:19.203273 | orchestrator | Thursday 09 April 2026 00:42:13 +0000 (0:00:00.139) 0:00:43.316 ********
2026-04-09 00:42:19.203278 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'})
2026-04-09 00:42:19.203284 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'})
2026-04-09 00:42:19.203288 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:19.203293 | orchestrator |
2026-04-09 00:42:19.203299 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-09 00:42:19.203305 | orchestrator | Thursday 09 April 2026 00:42:14 +0000 (0:00:00.135) 0:00:43.451 ********
2026-04-09 00:42:19.203311 | orchestrator | ok: [testbed-node-4] => {
2026-04-09 00:42:19.203316 | orchestrator |  "lvm_report": {
2026-04-09 00:42:19.203328 | orchestrator |  "lv": [
2026-04-09 00:42:19.203376 | orchestrator |  {
2026-04-09 00:42:19.203384 | orchestrator |  "lv_name": "osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e",
2026-04-09 00:42:19.203391 | orchestrator |  "vg_name": "ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e"
2026-04-09 00:42:19.203397 | orchestrator |  },
2026-04-09 00:42:19.203402 | orchestrator |  {
2026-04-09 00:42:19.203408 | orchestrator |  "lv_name": "osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1",
2026-04-09 00:42:19.203414 | orchestrator |  "vg_name": "ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1"
2026-04-09 00:42:19.203419 | orchestrator |  }
2026-04-09 00:42:19.203426 | orchestrator |  ],
2026-04-09 00:42:19.203432 | orchestrator |  "pv": [
2026-04-09 00:42:19.203438 | orchestrator |  {
2026-04-09 00:42:19.203445 | orchestrator |  "pv_name": "/dev/sdb",
2026-04-09
00:42:19.203451 | orchestrator |  "vg_name": "ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e" 2026-04-09 00:42:19.203457 | orchestrator |  }, 2026-04-09 00:42:19.203463 | orchestrator |  { 2026-04-09 00:42:19.203469 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-09 00:42:19.203473 | orchestrator |  "vg_name": "ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1" 2026-04-09 00:42:19.203515 | orchestrator |  } 2026-04-09 00:42:19.203520 | orchestrator |  ] 2026-04-09 00:42:19.203525 | orchestrator |  } 2026-04-09 00:42:19.203529 | orchestrator | } 2026-04-09 00:42:19.203533 | orchestrator | 2026-04-09 00:42:19.203538 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-09 00:42:19.203542 | orchestrator | 2026-04-09 00:42:19.203546 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-09 00:42:19.203550 | orchestrator | Thursday 09 April 2026 00:42:14 +0000 (0:00:00.409) 0:00:43.861 ******** 2026-04-09 00:42:19.203555 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-09 00:42:19.203559 | orchestrator | 2026-04-09 00:42:19.203563 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-09 00:42:19.203567 | orchestrator | Thursday 09 April 2026 00:42:14 +0000 (0:00:00.218) 0:00:44.079 ******** 2026-04-09 00:42:19.203571 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:42:19.203588 | orchestrator | 2026-04-09 00:42:19.203593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203597 | orchestrator | Thursday 09 April 2026 00:42:14 +0000 (0:00:00.206) 0:00:44.285 ******** 2026-04-09 00:42:19.203601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-09 00:42:19.203605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-09 
00:42:19.203609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-09 00:42:19.203613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-09 00:42:19.203619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-09 00:42:19.203623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-09 00:42:19.203627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-09 00:42:19.203631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-09 00:42:19.203635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-09 00:42:19.203639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-09 00:42:19.203643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-09 00:42:19.203647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-09 00:42:19.203651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-09 00:42:19.203655 | orchestrator | 2026-04-09 00:42:19.203659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203662 | orchestrator | Thursday 09 April 2026 00:42:15 +0000 (0:00:00.376) 0:00:44.662 ******** 2026-04-09 00:42:19.203666 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:19.203670 | orchestrator | 2026-04-09 00:42:19.203674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203678 | orchestrator | Thursday 09 April 2026 00:42:15 +0000 (0:00:00.203) 0:00:44.865 
******** 2026-04-09 00:42:19.203682 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:19.203686 | orchestrator | 2026-04-09 00:42:19.203690 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203707 | orchestrator | Thursday 09 April 2026 00:42:15 +0000 (0:00:00.177) 0:00:45.043 ******** 2026-04-09 00:42:19.203711 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:19.203715 | orchestrator | 2026-04-09 00:42:19.203719 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203723 | orchestrator | Thursday 09 April 2026 00:42:15 +0000 (0:00:00.177) 0:00:45.221 ******** 2026-04-09 00:42:19.203727 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:19.203731 | orchestrator | 2026-04-09 00:42:19.203735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203739 | orchestrator | Thursday 09 April 2026 00:42:16 +0000 (0:00:00.176) 0:00:45.398 ******** 2026-04-09 00:42:19.203743 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:19.203747 | orchestrator | 2026-04-09 00:42:19.203751 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203755 | orchestrator | Thursday 09 April 2026 00:42:16 +0000 (0:00:00.175) 0:00:45.573 ******** 2026-04-09 00:42:19.203759 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:19.203763 | orchestrator | 2026-04-09 00:42:19.203767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203771 | orchestrator | Thursday 09 April 2026 00:42:16 +0000 (0:00:00.436) 0:00:46.009 ******** 2026-04-09 00:42:19.203778 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:19.203782 | orchestrator | 2026-04-09 00:42:19.203789 | orchestrator | TASK [Add known links to the list of available 
block devices] ****************** 2026-04-09 00:42:19.203793 | orchestrator | Thursday 09 April 2026 00:42:16 +0000 (0:00:00.173) 0:00:46.183 ******** 2026-04-09 00:42:19.203797 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:19.203801 | orchestrator | 2026-04-09 00:42:19.203805 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203809 | orchestrator | Thursday 09 April 2026 00:42:17 +0000 (0:00:00.198) 0:00:46.382 ******** 2026-04-09 00:42:19.203813 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961) 2026-04-09 00:42:19.203818 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961) 2026-04-09 00:42:19.203822 | orchestrator | 2026-04-09 00:42:19.203826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203830 | orchestrator | Thursday 09 April 2026 00:42:17 +0000 (0:00:00.363) 0:00:46.745 ******** 2026-04-09 00:42:19.203834 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4915a96f-c727-49cd-8e71-365065423554) 2026-04-09 00:42:19.203838 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4915a96f-c727-49cd-8e71-365065423554) 2026-04-09 00:42:19.203842 | orchestrator | 2026-04-09 00:42:19.203846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203850 | orchestrator | Thursday 09 April 2026 00:42:17 +0000 (0:00:00.376) 0:00:47.122 ******** 2026-04-09 00:42:19.203854 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_de323fae-e08c-44ab-9f5d-e0649991af02) 2026-04-09 00:42:19.203858 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_de323fae-e08c-44ab-9f5d-e0649991af02) 2026-04-09 00:42:19.203861 | orchestrator | 2026-04-09 00:42:19.203865 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203869 | orchestrator | Thursday 09 April 2026 00:42:18 +0000 (0:00:00.393) 0:00:47.516 ******** 2026-04-09 00:42:19.203873 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0aa1a7f9-eb63-47f4-a3c4-c66e6167b3d6) 2026-04-09 00:42:19.203877 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0aa1a7f9-eb63-47f4-a3c4-c66e6167b3d6) 2026-04-09 00:42:19.203882 | orchestrator | 2026-04-09 00:42:19.203885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-09 00:42:19.203889 | orchestrator | Thursday 09 April 2026 00:42:18 +0000 (0:00:00.428) 0:00:47.944 ******** 2026-04-09 00:42:19.203893 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-09 00:42:19.203897 | orchestrator | 2026-04-09 00:42:19.203901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:19.203905 | orchestrator | Thursday 09 April 2026 00:42:18 +0000 (0:00:00.298) 0:00:48.242 ******** 2026-04-09 00:42:19.203909 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-09 00:42:19.203913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-09 00:42:19.203917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-09 00:42:19.203921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-09 00:42:19.203925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-09 00:42:19.203929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-09 00:42:19.203933 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-09 00:42:19.203937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-09 00:42:19.203941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-09 00:42:19.203945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-09 00:42:19.203952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-09 00:42:19.203959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-09 00:42:27.641445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-09 00:42:27.641560 | orchestrator | 2026-04-09 00:42:27.641575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.641598 | orchestrator | Thursday 09 April 2026 00:42:19 +0000 (0:00:00.403) 0:00:48.646 ******** 2026-04-09 00:42:27.641609 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.641620 | orchestrator | 2026-04-09 00:42:27.641631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.641641 | orchestrator | Thursday 09 April 2026 00:42:19 +0000 (0:00:00.176) 0:00:48.822 ******** 2026-04-09 00:42:27.641651 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.641661 | orchestrator | 2026-04-09 00:42:27.641671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.641682 | orchestrator | Thursday 09 April 2026 00:42:19 +0000 (0:00:00.171) 0:00:48.994 ******** 2026-04-09 00:42:27.641692 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.641702 | orchestrator | 2026-04-09 00:42:27.641712 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.641737 | orchestrator | Thursday 09 April 2026 00:42:20 +0000 (0:00:00.441) 0:00:49.435 ******** 2026-04-09 00:42:27.641748 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.641758 | orchestrator | 2026-04-09 00:42:27.641768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.641778 | orchestrator | Thursday 09 April 2026 00:42:20 +0000 (0:00:00.212) 0:00:49.647 ******** 2026-04-09 00:42:27.641788 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.641798 | orchestrator | 2026-04-09 00:42:27.641809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.641819 | orchestrator | Thursday 09 April 2026 00:42:20 +0000 (0:00:00.183) 0:00:49.831 ******** 2026-04-09 00:42:27.641829 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.641839 | orchestrator | 2026-04-09 00:42:27.641849 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.641859 | orchestrator | Thursday 09 April 2026 00:42:20 +0000 (0:00:00.173) 0:00:50.005 ******** 2026-04-09 00:42:27.641869 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.641879 | orchestrator | 2026-04-09 00:42:27.641890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.641900 | orchestrator | Thursday 09 April 2026 00:42:20 +0000 (0:00:00.200) 0:00:50.206 ******** 2026-04-09 00:42:27.641910 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.641920 | orchestrator | 2026-04-09 00:42:27.641930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.641940 | orchestrator | Thursday 09 April 2026 00:42:21 +0000 (0:00:00.200) 0:00:50.407 ******** 
2026-04-09 00:42:27.641951 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-09 00:42:27.641962 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-09 00:42:27.641973 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-09 00:42:27.641983 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-09 00:42:27.641993 | orchestrator | 2026-04-09 00:42:27.642009 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.642176 | orchestrator | Thursday 09 April 2026 00:42:21 +0000 (0:00:00.639) 0:00:51.047 ******** 2026-04-09 00:42:27.642189 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.642199 | orchestrator | 2026-04-09 00:42:27.642209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.642219 | orchestrator | Thursday 09 April 2026 00:42:21 +0000 (0:00:00.217) 0:00:51.264 ******** 2026-04-09 00:42:27.642248 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.642264 | orchestrator | 2026-04-09 00:42:27.642282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.642300 | orchestrator | Thursday 09 April 2026 00:42:22 +0000 (0:00:00.201) 0:00:51.466 ******** 2026-04-09 00:42:27.642316 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.642332 | orchestrator | 2026-04-09 00:42:27.642342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-09 00:42:27.642382 | orchestrator | Thursday 09 April 2026 00:42:22 +0000 (0:00:00.195) 0:00:51.661 ******** 2026-04-09 00:42:27.642393 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.642403 | orchestrator | 2026-04-09 00:42:27.642413 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-09 00:42:27.642423 | orchestrator | Thursday 09 April 2026 00:42:22 
+0000 (0:00:00.234) 0:00:51.896 ******** 2026-04-09 00:42:27.642433 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.642443 | orchestrator | 2026-04-09 00:42:27.642453 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-09 00:42:27.642463 | orchestrator | Thursday 09 April 2026 00:42:22 +0000 (0:00:00.146) 0:00:52.043 ******** 2026-04-09 00:42:27.642473 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e1b9ff7a-7324-53df-902d-27a5c0e1e380'}}) 2026-04-09 00:42:27.642483 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c85b9e91-1f7c-51a1-92b9-1f1081da5c54'}}) 2026-04-09 00:42:27.642493 | orchestrator | 2026-04-09 00:42:27.642503 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-09 00:42:27.642513 | orchestrator | Thursday 09 April 2026 00:42:23 +0000 (0:00:00.472) 0:00:52.515 ******** 2026-04-09 00:42:27.642525 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'}) 2026-04-09 00:42:27.642537 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'}) 2026-04-09 00:42:27.642547 | orchestrator | 2026-04-09 00:42:27.642557 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-09 00:42:27.642585 | orchestrator | Thursday 09 April 2026 00:42:24 +0000 (0:00:01.800) 0:00:54.315 ******** 2026-04-09 00:42:27.642596 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})  2026-04-09 00:42:27.642607 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})  2026-04-09 00:42:27.642617 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.642627 | orchestrator | 2026-04-09 00:42:27.642637 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-09 00:42:27.642649 | orchestrator | Thursday 09 April 2026 00:42:25 +0000 (0:00:00.144) 0:00:54.460 ******** 2026-04-09 00:42:27.642666 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'}) 2026-04-09 00:42:27.642682 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'}) 2026-04-09 00:42:27.642696 | orchestrator | 2026-04-09 00:42:27.642712 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-09 00:42:27.642728 | orchestrator | Thursday 09 April 2026 00:42:26 +0000 (0:00:01.343) 0:00:55.804 ******** 2026-04-09 00:42:27.642745 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})  2026-04-09 00:42:27.642761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})  2026-04-09 00:42:27.642790 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.642807 | orchestrator | 2026-04-09 00:42:27.642823 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-09 00:42:27.642840 | orchestrator | Thursday 09 April 2026 00:42:26 +0000 (0:00:00.154) 0:00:55.958 ******** 2026-04-09 00:42:27.642857 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.642874 | 
orchestrator | 2026-04-09 00:42:27.642885 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-09 00:42:27.642895 | orchestrator | Thursday 09 April 2026 00:42:26 +0000 (0:00:00.122) 0:00:56.080 ******** 2026-04-09 00:42:27.642905 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})  2026-04-09 00:42:27.642915 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})  2026-04-09 00:42:27.642925 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.642935 | orchestrator | 2026-04-09 00:42:27.642945 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-09 00:42:27.642955 | orchestrator | Thursday 09 April 2026 00:42:26 +0000 (0:00:00.153) 0:00:56.234 ******** 2026-04-09 00:42:27.642965 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.642974 | orchestrator | 2026-04-09 00:42:27.642984 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-09 00:42:27.642994 | orchestrator | Thursday 09 April 2026 00:42:27 +0000 (0:00:00.136) 0:00:56.371 ******** 2026-04-09 00:42:27.643004 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})  2026-04-09 00:42:27.643014 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})  2026-04-09 00:42:27.643024 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.643034 | orchestrator | 2026-04-09 00:42:27.643044 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-04-09 00:42:27.643054 | orchestrator | Thursday 09 April 2026 00:42:27 +0000 (0:00:00.162) 0:00:56.533 ******** 2026-04-09 00:42:27.643064 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.643074 | orchestrator | 2026-04-09 00:42:27.643084 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-09 00:42:27.643093 | orchestrator | Thursday 09 April 2026 00:42:27 +0000 (0:00:00.132) 0:00:56.666 ******** 2026-04-09 00:42:27.643103 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})  2026-04-09 00:42:27.643114 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})  2026-04-09 00:42:27.643126 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:27.643137 | orchestrator | 2026-04-09 00:42:27.643148 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-09 00:42:27.643159 | orchestrator | Thursday 09 April 2026 00:42:27 +0000 (0:00:00.145) 0:00:56.811 ******** 2026-04-09 00:42:27.643171 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:42:27.643182 | orchestrator | 2026-04-09 00:42:27.643193 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-09 00:42:27.643204 | orchestrator | Thursday 09 April 2026 00:42:27 +0000 (0:00:00.133) 0:00:56.944 ******** 2026-04-09 00:42:27.643226 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})  2026-04-09 00:42:33.443166 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})  2026-04-09 00:42:33.443271 | 
orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:33.443282 | orchestrator | 2026-04-09 00:42:33.443289 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-09 00:42:33.443297 | orchestrator | Thursday 09 April 2026 00:42:27 +0000 (0:00:00.352) 0:00:57.297 ******** 2026-04-09 00:42:33.443303 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})  2026-04-09 00:42:33.443310 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})  2026-04-09 00:42:33.443316 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:33.443322 | orchestrator | 2026-04-09 00:42:33.443381 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-09 00:42:33.443389 | orchestrator | Thursday 09 April 2026 00:42:28 +0000 (0:00:00.164) 0:00:57.461 ******** 2026-04-09 00:42:33.443396 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})  2026-04-09 00:42:33.443402 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})  2026-04-09 00:42:33.443408 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:33.443415 | orchestrator | 2026-04-09 00:42:33.443421 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-09 00:42:33.443427 | orchestrator | Thursday 09 April 2026 00:42:28 +0000 (0:00:00.174) 0:00:57.635 ******** 2026-04-09 00:42:33.443433 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:42:33.443440 | orchestrator | 2026-04-09 00:42:33.443446 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-09 00:42:33.443452 | orchestrator | Thursday 09 April 2026 00:42:28 +0000 (0:00:00.154) 0:00:57.789 ********
2026-04-09 00:42:33.443458 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.443464 | orchestrator |
2026-04-09 00:42:33.443470 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-09 00:42:33.443476 | orchestrator | Thursday 09 April 2026 00:42:28 +0000 (0:00:00.134) 0:00:57.924 ********
2026-04-09 00:42:33.443482 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.443488 | orchestrator |
2026-04-09 00:42:33.443496 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-09 00:42:33.443502 | orchestrator | Thursday 09 April 2026 00:42:28 +0000 (0:00:00.137) 0:00:58.062 ********
2026-04-09 00:42:33.443508 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 00:42:33.443515 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-09 00:42:33.443521 | orchestrator | }
2026-04-09 00:42:33.443528 | orchestrator |
2026-04-09 00:42:33.443534 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-09 00:42:33.443540 | orchestrator | Thursday 09 April 2026 00:42:28 +0000 (0:00:00.136) 0:00:58.198 ********
2026-04-09 00:42:33.443546 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 00:42:33.443552 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-09 00:42:33.443558 | orchestrator | }
2026-04-09 00:42:33.443564 | orchestrator |
2026-04-09 00:42:33.443570 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-09 00:42:33.443576 | orchestrator | Thursday 09 April 2026 00:42:28 +0000 (0:00:00.141) 0:00:58.340 ********
2026-04-09 00:42:33.443582 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 00:42:33.443588 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-09 00:42:33.443595 | orchestrator | }
2026-04-09 00:42:33.443601 | orchestrator |
2026-04-09 00:42:33.443607 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-09 00:42:33.443613 | orchestrator | Thursday 09 April 2026 00:42:29 +0000 (0:00:00.140) 0:00:58.480 ********
2026-04-09 00:42:33.443624 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:33.443630 | orchestrator |
2026-04-09 00:42:33.443636 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-09 00:42:33.443642 | orchestrator | Thursday 09 April 2026 00:42:29 +0000 (0:00:00.544) 0:00:59.024 ********
2026-04-09 00:42:33.443649 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:33.443655 | orchestrator |
2026-04-09 00:42:33.443661 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-09 00:42:33.443667 | orchestrator | Thursday 09 April 2026 00:42:30 +0000 (0:00:00.507) 0:00:59.532 ********
2026-04-09 00:42:33.443673 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:33.443679 | orchestrator |
2026-04-09 00:42:33.443685 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-09 00:42:33.443691 | orchestrator | Thursday 09 April 2026 00:42:30 +0000 (0:00:00.269) 0:01:00.032 ********
2026-04-09 00:42:33.443697 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:33.443703 | orchestrator |
2026-04-09 00:42:33.443709 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-09 00:42:33.443716 | orchestrator | Thursday 09 April 2026 00:42:30 +0000 (0:00:00.108) 0:01:00.301 ********
2026-04-09 00:42:33.443722 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.443728 | orchestrator |
2026-04-09 00:42:33.443736 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-09 00:42:33.443743 | orchestrator | Thursday 09 April 2026 00:42:31 +0000 (0:00:00.108) 0:01:00.410 ********
2026-04-09 00:42:33.443750 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.443756 | orchestrator |
2026-04-09 00:42:33.443763 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-09 00:42:33.443770 | orchestrator | Thursday 09 April 2026 00:42:31 +0000 (0:00:00.094) 0:01:00.504 ********
2026-04-09 00:42:33.443777 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 00:42:33.443784 | orchestrator |     "vgs_report": {
2026-04-09 00:42:33.443791 | orchestrator |         "vg": []
2026-04-09 00:42:33.443810 | orchestrator |     }
2026-04-09 00:42:33.443817 | orchestrator | }
2026-04-09 00:42:33.443824 | orchestrator |
2026-04-09 00:42:33.443831 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-09 00:42:33.443839 | orchestrator | Thursday 09 April 2026 00:42:31 +0000 (0:00:00.127) 0:01:00.632 ********
2026-04-09 00:42:33.443846 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.443853 | orchestrator |
2026-04-09 00:42:33.443860 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-09 00:42:33.443867 | orchestrator | Thursday 09 April 2026 00:42:31 +0000 (0:00:00.109) 0:01:00.742 ********
2026-04-09 00:42:33.443874 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.443881 | orchestrator |
2026-04-09 00:42:33.443888 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-09 00:42:33.443895 | orchestrator | Thursday 09 April 2026 00:42:31 +0000 (0:00:00.107) 0:01:00.850 ********
2026-04-09 00:42:33.443902 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.443909 | orchestrator |
2026-04-09 00:42:33.443916 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-09 00:42:33.443923 | orchestrator | Thursday 09 April 2026 00:42:31 +0000 (0:00:00.119) 0:01:00.969 ********
2026-04-09 00:42:33.443933 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.443940 | orchestrator |
2026-04-09 00:42:33.443947 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-09 00:42:33.443954 | orchestrator | Thursday 09 April 2026 00:42:31 +0000 (0:00:00.110) 0:01:01.080 ********
2026-04-09 00:42:33.443961 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.443968 | orchestrator |
2026-04-09 00:42:33.443975 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-09 00:42:33.443982 | orchestrator | Thursday 09 April 2026 00:42:31 +0000 (0:00:00.121) 0:01:01.201 ********
2026-04-09 00:42:33.443988 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.443995 | orchestrator |
2026-04-09 00:42:33.444002 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-09 00:42:33.444013 | orchestrator | Thursday 09 April 2026 00:42:31 +0000 (0:00:00.112) 0:01:01.314 ********
2026-04-09 00:42:33.444020 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.444027 | orchestrator |
2026-04-09 00:42:33.444034 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-09 00:42:33.444041 | orchestrator | Thursday 09 April 2026 00:42:32 +0000 (0:00:00.125) 0:01:01.439 ********
2026-04-09 00:42:33.444048 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.444054 | orchestrator |
2026-04-09 00:42:33.444061 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-09 00:42:33.444069 | orchestrator | Thursday 09 April 2026 00:42:32 +0000 (0:00:00.108) 0:01:01.548 ********
2026-04-09 00:42:33.444075 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.444083 | orchestrator |
2026-04-09 00:42:33.444090 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-09 00:42:33.444097 | orchestrator | Thursday 09 April 2026 00:42:32 +0000 (0:00:00.239) 0:01:01.788 ********
2026-04-09 00:42:33.444104 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.444111 | orchestrator |
2026-04-09 00:42:33.444117 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-09 00:42:33.444123 | orchestrator | Thursday 09 April 2026 00:42:32 +0000 (0:00:00.121) 0:01:01.909 ********
2026-04-09 00:42:33.444129 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.444135 | orchestrator |
2026-04-09 00:42:33.444141 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-09 00:42:33.444147 | orchestrator | Thursday 09 April 2026 00:42:32 +0000 (0:00:00.156) 0:01:02.017 ********
2026-04-09 00:42:33.444153 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.444159 | orchestrator |
2026-04-09 00:42:33.444165 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-09 00:42:33.444171 | orchestrator | Thursday 09 April 2026 00:42:32 +0000 (0:00:00.140) 0:01:02.174 ********
2026-04-09 00:42:33.444177 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.444183 | orchestrator |
2026-04-09 00:42:33.444189 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-09 00:42:33.444195 | orchestrator | Thursday 09 April 2026 00:42:32 +0000 (0:00:00.140) 0:01:02.314 ********
2026-04-09 00:42:33.444201 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.444207 | orchestrator |
2026-04-09 00:42:33.444213 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-09 00:42:33.444220 | orchestrator | Thursday 09 April 2026 00:42:33 +0000 (0:00:00.162) 0:01:02.477 ********
2026-04-09 00:42:33.444226 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})
2026-04-09 00:42:33.444232 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})
2026-04-09 00:42:33.444238 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.444244 | orchestrator |
2026-04-09 00:42:33.444250 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-09 00:42:33.444256 | orchestrator | Thursday 09 April 2026 00:42:33 +0000 (0:00:00.137) 0:01:02.614 ********
2026-04-09 00:42:33.444262 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})
2026-04-09 00:42:33.444269 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})
2026-04-09 00:42:33.444275 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:33.444281 | orchestrator |
2026-04-09 00:42:33.444287 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-09 00:42:33.444293 | orchestrator | Thursday 09 April 2026 00:42:33 +0000 (0:00:00.137) 0:01:02.752 ********
2026-04-09 00:42:33.444308 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})
2026-04-09 00:42:36.199045 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})
2026-04-09 00:42:36.199156 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:36.199173 | orchestrator |
2026-04-09 00:42:36.199186 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-09 00:42:36.199200 | orchestrator | Thursday 09 April 2026 00:42:33 +0000 (0:00:00.166) 0:01:02.918 ********
2026-04-09 00:42:36.199212 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})
2026-04-09 00:42:36.199241 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})
2026-04-09 00:42:36.199252 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:36.199264 | orchestrator |
2026-04-09 00:42:36.199276 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-09 00:42:36.199288 | orchestrator | Thursday 09 April 2026 00:42:33 +0000 (0:00:00.141) 0:01:03.060 ********
2026-04-09 00:42:36.199299 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})
2026-04-09 00:42:36.199311 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})
2026-04-09 00:42:36.199323 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:36.199334 | orchestrator |
2026-04-09 00:42:36.199346 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-09 00:42:36.199407 | orchestrator | Thursday 09 April 2026 00:42:33 +0000 (0:00:00.123) 0:01:03.183 ********
2026-04-09 00:42:36.199419 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})
2026-04-09 00:42:36.199431 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})
2026-04-09 00:42:36.199443 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:36.199454 | orchestrator |
2026-04-09 00:42:36.199466 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-09 00:42:36.199477 | orchestrator | Thursday 09 April 2026 00:42:33 +0000 (0:00:00.137) 0:01:03.320 ********
2026-04-09 00:42:36.199489 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})
2026-04-09 00:42:36.199500 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})
2026-04-09 00:42:36.199512 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:36.199523 | orchestrator |
2026-04-09 00:42:36.199535 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-09 00:42:36.199546 | orchestrator | Thursday 09 April 2026 00:42:34 +0000 (0:00:00.273) 0:01:03.594 ********
2026-04-09 00:42:36.199558 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})
2026-04-09 00:42:36.199570 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})
2026-04-09 00:42:36.199581 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:36.199592 | orchestrator |
2026-04-09 00:42:36.199604 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-09 00:42:36.199638 | orchestrator | Thursday 09 April 2026 00:42:34 +0000 (0:00:00.129) 0:01:03.723 ********
2026-04-09 00:42:36.199650 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:36.199671 | orchestrator |
2026-04-09 00:42:36.199690 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-09 00:42:36.199708 | orchestrator | Thursday 09 April 2026 00:42:34 +0000 (0:00:00.492) 0:01:04.215 ********
2026-04-09 00:42:36.199724 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:36.199743 | orchestrator |
2026-04-09 00:42:36.199763 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-09 00:42:36.199782 | orchestrator | Thursday 09 April 2026 00:42:35 +0000 (0:00:00.466) 0:01:04.682 ********
2026-04-09 00:42:36.199801 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:36.199813 | orchestrator |
2026-04-09 00:42:36.199824 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-09 00:42:36.199836 | orchestrator | Thursday 09 April 2026 00:42:35 +0000 (0:00:00.137) 0:01:04.820 ********
2026-04-09 00:42:36.199848 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'vg_name': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})
2026-04-09 00:42:36.199861 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'vg_name': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})
2026-04-09 00:42:36.199873 | orchestrator |
2026-04-09 00:42:36.199884 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-09 00:42:36.199895 | orchestrator | Thursday 09 April 2026 00:42:35 +0000 (0:00:00.157) 0:01:04.978 ********
2026-04-09 00:42:36.199926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})
2026-04-09 00:42:36.199939 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})
2026-04-09 00:42:36.199951 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:36.199962 | orchestrator |
2026-04-09 00:42:36.199974 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-09 00:42:36.199985 | orchestrator | Thursday 09 April 2026 00:42:35 +0000 (0:00:00.140) 0:01:05.118 ********
2026-04-09 00:42:36.199996 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})
2026-04-09 00:42:36.200016 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})
2026-04-09 00:42:36.200028 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:36.200039 | orchestrator |
2026-04-09 00:42:36.200051 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-09 00:42:36.200062 | orchestrator | Thursday 09 April 2026 00:42:35 +0000 (0:00:00.152) 0:01:05.270 ********
2026-04-09 00:42:36.200074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'})
2026-04-09 00:42:36.200085 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'})
2026-04-09 00:42:36.200097 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:36.200108 | orchestrator |
2026-04-09 00:42:36.200120 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-09 00:42:36.200131 | orchestrator | Thursday 09 April 2026 00:42:36 +0000 (0:00:00.146) 0:01:05.417 ********
2026-04-09 00:42:36.200143 | orchestrator | ok: [testbed-node-5] => {
2026-04-09 00:42:36.200154 | orchestrator |     "lvm_report": {
2026-04-09 00:42:36.200166 | orchestrator |         "lv": [
2026-04-09 00:42:36.200178 | orchestrator |             {
2026-04-09 00:42:36.200189 | orchestrator |                 "lv_name": "osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54",
2026-04-09 00:42:36.200214 | orchestrator |                 "vg_name": "ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54"
2026-04-09 00:42:36.200226 | orchestrator |             },
2026-04-09 00:42:36.200237 | orchestrator |             {
2026-04-09 00:42:36.200248 | orchestrator |                 "lv_name": "osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380",
2026-04-09 00:42:36.200260 | orchestrator |                 "vg_name": "ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380"
2026-04-09 00:42:36.200271 | orchestrator |             }
2026-04-09 00:42:36.200283 | orchestrator |         ],
2026-04-09 00:42:36.200294 | orchestrator |         "pv": [
2026-04-09 00:42:36.200306 | orchestrator |             {
2026-04-09 00:42:36.200317 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-09 00:42:36.200329 | orchestrator |                 "vg_name": "ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380"
2026-04-09 00:42:36.200341 | orchestrator |             },
2026-04-09 00:42:36.200382 | orchestrator |             {
2026-04-09 00:42:36.200395 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-09 00:42:36.200407 | orchestrator |                 "vg_name": "ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54"
2026-04-09 00:42:36.200418 | orchestrator |             }
2026-04-09 00:42:36.200430 | orchestrator |         ]
2026-04-09 00:42:36.200441 | orchestrator |     }
2026-04-09 00:42:36.200459 | orchestrator | }
2026-04-09 00:42:36.200478 | orchestrator |
2026-04-09 00:42:36.200496 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:42:36.200513 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-09 00:42:36.200531 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-09 00:42:36.200549 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-09 00:42:36.200567 | orchestrator |
2026-04-09 00:42:36.200585 | orchestrator |
2026-04-09 00:42:36.200604 | orchestrator |
2026-04-09 00:42:36.200622 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:42:36.200642 | orchestrator | Thursday 09 April 2026 00:42:36 +0000 (0:00:00.136) 0:01:05.554 ********
2026-04-09 00:42:36.200661 | orchestrator | ===============================================================================
2026-04-09 00:42:36.200680 | orchestrator | Create block VGs -------------------------------------------------------- 5.53s
2026-04-09 00:42:36.200693 | orchestrator | Create block LVs -------------------------------------------------------- 4.04s
2026-04-09 00:42:36.200704 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.75s
2026-04-09 00:42:36.200716 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.47s
2026-04-09 00:42:36.200727 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.46s
2026-04-09 00:42:36.200739 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.44s
2026-04-09 00:42:36.200750 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.43s
2026-04-09 00:42:36.200761 | orchestrator | Add known partitions to the list of available block devices ------------- 1.39s
2026-04-09 00:42:36.200782 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s
2026-04-09 00:42:36.596862 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-04-09 00:42:36.597762 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.81s
2026-04-09 00:42:36.597802 | orchestrator | Print LVM report data --------------------------------------------------- 0.80s
2026-04-09 00:42:36.597813 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-04-09 00:42:36.597823 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.67s
2026-04-09 00:42:36.597856 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.64s
2026-04-09 00:42:36.597866 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2026-04-09 00:42:36.597875 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.64s
2026-04-09 00:42:36.597884 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.63s
2026-04-09 00:42:36.597894 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-04-09 00:42:36.597903 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.59s
2026-04-09 00:42:48.163708 | orchestrator | 2026-04-09 00:42:48 | INFO  | Prepare task for execution of facts.
2026-04-09 00:42:48.243468 | orchestrator | 2026-04-09 00:42:48 | INFO  | Task 7eb282ec-42f7-453d-a30f-508dadbb2140 (facts) was prepared for execution.
2026-04-09 00:42:48.243572 | orchestrator | 2026-04-09 00:42:48 | INFO  | It takes a moment until task 7eb282ec-42f7-453d-a30f-508dadbb2140 (facts) has been started and output is visible here.
2026-04-09 00:42:58.296280 | orchestrator |
2026-04-09 00:42:58.296456 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-09 00:42:58.296478 | orchestrator |
2026-04-09 00:42:58.296491 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-09 00:42:58.296502 | orchestrator | Thursday 09 April 2026 00:42:51 +0000 (0:00:00.246) 0:00:00.246 ********
2026-04-09 00:42:58.296512 | orchestrator | ok: [testbed-manager]
2026-04-09 00:42:58.296524 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:42:58.296534 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:42:58.296544 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:42:58.296554 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:42:58.296565 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:42:58.296574 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:58.296584 | orchestrator |
2026-04-09 00:42:58.296595 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-09 00:42:58.296627 | orchestrator | Thursday 09 April 2026 00:42:52 +0000 (0:00:01.126) 0:00:01.372 ********
2026-04-09 00:42:58.296638 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:42:58.296655 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:42:58.296670 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:42:58.296686 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:42:58.296702 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:42:58.296718 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:58.296735 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:58.296752 | orchestrator |
2026-04-09 00:42:58.296768 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-09 00:42:58.296784 | orchestrator |
2026-04-09 00:42:58.296795 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-09 00:42:58.296813 | orchestrator | Thursday 09 April 2026 00:42:53 +0000 (0:00:00.869) 0:00:02.242 ********
2026-04-09 00:42:58.296830 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:42:58.296849 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:42:58.296868 | orchestrator | ok: [testbed-manager]
2026-04-09 00:42:58.296885 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:42:58.296900 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:42:58.296911 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:42:58.296922 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:42:58.296934 | orchestrator |
2026-04-09 00:42:58.296945 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-09 00:42:58.296957 | orchestrator |
2026-04-09 00:42:58.296969 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-09 00:42:58.296986 | orchestrator | Thursday 09 April 2026 00:42:57 +0000 (0:00:04.477) 0:00:06.720 ********
2026-04-09 00:42:58.297003 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:42:58.297022 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:42:58.297038 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:42:58.297081 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:42:58.297094 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:42:58.297106 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:42:58.297117 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:42:58.297129 | orchestrator |
2026-04-09 00:42:58.297141 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:42:58.297154 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:42:58.297167 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:42:58.297179 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:42:58.297191 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:42:58.297201 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:42:58.297211 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:42:58.297220 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:42:58.297231 | orchestrator |
2026-04-09 00:42:58.297241 | orchestrator |
2026-04-09 00:42:58.297251 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:42:58.297261 | orchestrator | Thursday 09 April 2026 00:42:58 +0000 (0:00:00.454) 0:00:07.175 ********
2026-04-09 00:42:58.297271 | orchestrator | ===============================================================================
2026-04-09 00:42:58.297281 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.48s
2026-04-09 00:42:58.297293 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s
2026-04-09 00:42:58.297316 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 0.87s
2026-04-09 00:42:58.297334 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s
2026-04-09 00:43:09.530721 | orchestrator | 2026-04-09 00:43:09 | INFO  | Prepare task for execution of frr.
2026-04-09 00:43:09.605963 | orchestrator | 2026-04-09 00:43:09 | INFO  | Task 066c4372-cece-443c-a29c-8712d9117faa (frr) was prepared for execution.
2026-04-09 00:43:09.606136 | orchestrator | 2026-04-09 00:43:09 | INFO  | It takes a moment until task 066c4372-cece-443c-a29c-8712d9117faa (frr) has been started and output is visible here.
2026-04-09 00:43:32.560813 | orchestrator |
2026-04-09 00:43:32.560935 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-09 00:43:32.560956 | orchestrator |
2026-04-09 00:43:32.560965 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-09 00:43:32.560974 | orchestrator | Thursday 09 April 2026 00:43:12 +0000 (0:00:00.269) 0:00:00.270 ********
2026-04-09 00:43:32.560982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 00:43:32.560992 | orchestrator |
2026-04-09 00:43:32.560999 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-09 00:43:32.561007 | orchestrator | Thursday 09 April 2026 00:43:12 +0000 (0:00:00.198) 0:00:00.468 ********
2026-04-09 00:43:32.561015 | orchestrator | changed: [testbed-manager]
2026-04-09 00:43:32.561023 | orchestrator |
2026-04-09 00:43:32.561031 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-09 00:43:32.561039 | orchestrator | Thursday 09 April 2026 00:43:14 +0000 (0:00:01.431) 0:00:01.899 ********
2026-04-09 00:43:32.561068 | orchestrator | changed: [testbed-manager]
2026-04-09 00:43:32.561077 | orchestrator |
2026-04-09 00:43:32.561088 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-09 00:43:32.561100 | orchestrator | Thursday 09 April 2026 00:43:22 +0000 (0:00:08.303) 0:00:10.203 ********
2026-04-09 00:43:32.561112 | orchestrator | ok: [testbed-manager]
2026-04-09 00:43:32.561122 | orchestrator |
2026-04-09 00:43:32.561130 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-09 00:43:32.561138 | orchestrator | Thursday 09 April 2026 00:43:23 +0000 (0:00:00.896) 0:00:11.100 ********
2026-04-09 00:43:32.561146 | orchestrator | changed: [testbed-manager]
2026-04-09 00:43:32.561154 | orchestrator |
2026-04-09 00:43:32.561161 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-09 00:43:32.561169 | orchestrator | Thursday 09 April 2026 00:43:24 +0000 (0:00:01.090) 0:00:11.983 ********
2026-04-09 00:43:32.561179 | orchestrator | ok: [testbed-manager]
2026-04-09 00:43:32.561192 | orchestrator |
2026-04-09 00:43:32.561205 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-04-09 00:43:32.561218 | orchestrator | Thursday 09 April 2026 00:43:25 +0000 (0:00:00.155) 0:00:13.074 ********
2026-04-09 00:43:32.561231 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:43:32.561243 | orchestrator |
2026-04-09 00:43:32.561255 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-04-09 00:43:32.561268 | orchestrator | Thursday 09 April 2026 00:43:25 +0000 (0:00:00.306) 0:00:13.229 ********
2026-04-09 00:43:32.561282 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:43:32.561295 | orchestrator |
2026-04-09 00:43:32.561307 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-04-09 00:43:32.561318 | orchestrator | Thursday 09 April 2026 00:43:25 +0000 (0:00:00.158) 0:00:13.536 ********
2026-04-09 00:43:32.561331 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:43:32.561344 | orchestrator |
2026-04-09 00:43:32.561360 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-09 00:43:32.561407 | orchestrator | Thursday 09 April 2026 00:43:26 +0000 (0:00:00.137) 0:00:13.694 ********
2026-04-09 00:43:32.561422 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:43:32.561437 | orchestrator |
2026-04-09 00:43:32.561450 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-09 00:43:32.561464 | orchestrator | Thursday 09 April 2026 00:43:26 +0000 (0:00:00.137) 0:00:13.831 ********
2026-04-09 00:43:32.561477 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:43:32.561490 | orchestrator |
2026-04-09 00:43:32.561503 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-09 00:43:32.561512 | orchestrator | Thursday 09 April 2026 00:43:26 +0000 (0:00:00.155) 0:00:13.987 ********
2026-04-09 00:43:32.561521 | orchestrator | changed: [testbed-manager]
2026-04-09 00:43:32.561530 | orchestrator |
2026-04-09 00:43:32.561538 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-09 00:43:32.561547 | orchestrator | Thursday 09 April 2026 00:43:27 +0000 (0:00:00.977) 0:00:14.964 ********
2026-04-09 00:43:32.561556 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-09 00:43:32.561565 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-09 00:43:32.561575 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-09 00:43:32.561582 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-09 00:43:32.561590 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-09 00:43:32.561598 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-09 00:43:32.561606 | orchestrator |
2026-04-09 00:43:32.561613 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-09 00:43:32.561630 | orchestrator | Thursday 09 April 2026 00:43:29 +0000 (0:00:02.216) 0:00:17.181 ********
2026-04-09 00:43:32.561650 | orchestrator | ok: [testbed-manager]
2026-04-09 00:43:32.561658 | orchestrator |
2026-04-09 00:43:32.561665 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-04-09 00:43:32.561673 | orchestrator | Thursday 09 April 2026 00:43:30 +0000 (0:00:01.281) 0:00:18.462 ********
2026-04-09 00:43:32.561681 | orchestrator | changed: [testbed-manager]
2026-04-09 00:43:32.561692 | orchestrator |
2026-04-09 00:43:32.561705 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:43:32.561718 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-09 00:43:32.561731 | orchestrator |
2026-04-09 00:43:32.561744 | orchestrator |
2026-04-09 00:43:32.561776 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:43:32.561792 | orchestrator | Thursday 09 April 2026 00:43:32 +0000 (0:00:01.383) 0:00:19.846 ********
2026-04-09 00:43:32.561804 | orchestrator | ===============================================================================
2026-04-09 00:43:32.561816 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.30s
2026-04-09 00:43:32.561828 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.22s
2026-04-09 00:43:32.561840 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.43s
2026-04-09 00:43:32.561854 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.38s
2026-04-09 00:43:32.561867 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.28s
2026-04-09 00:43:32.561880 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.09s
2026-04-09 00:43:32.561892 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.98s
2026-04-09 00:43:32.561906 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.90s
2026-04-09 00:43:32.561919 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.88s
2026-04-09 00:43:32.561932 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.31s
2026-04-09 00:43:32.561944 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s
2026-04-09 00:43:32.561956 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s
2026-04-09 00:43:32.561968 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s
2026-04-09 00:43:32.561976 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s
2026-04-09 00:43:32.561983 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s
2026-04-09 00:43:32.753327 | orchestrator |
2026-04-09 00:43:32.756836 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Apr 9 00:43:32 UTC 2026
2026-04-09 00:43:32.756943 | orchestrator |
2026-04-09 00:43:33.944844 | orchestrator | 2026-04-09 00:43:33 | INFO  | Collection nutshell is prepared for execution
2026-04-09 00:43:34.063080 | orchestrator | 2026-04-09 00:43:34 | INFO  | A [0] - dotfiles
2026-04-09 00:43:44.162261 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [0] - homer
2026-04-09 00:43:44.163331 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [0] - netdata
2026-04-09 00:43:44.163402 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [0] - openstackclient
2026-04-09 00:43:44.163423 | orchestrator | 2026-04-09 00:43:44
| INFO  | A [0] - phpmyadmin
2026-04-09 00:43:44.163450 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [0] - common
2026-04-09 00:43:44.166558 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [1] -- loadbalancer
2026-04-09 00:43:44.166639 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [2] --- opensearch
2026-04-09 00:43:44.166666 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [2] --- mariadb-ng
2026-04-09 00:43:44.166989 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [3] ---- horizon
2026-04-09 00:43:44.167337 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [3] ---- keystone
2026-04-09 00:43:44.167670 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [4] ----- neutron
2026-04-09 00:43:44.168163 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [5] ------ wait-for-nova
2026-04-09 00:43:44.168545 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [6] ------- octavia
2026-04-09 00:43:44.170237 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [4] ----- barbican
2026-04-09 00:43:44.170534 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [4] ----- designate
2026-04-09 00:43:44.170557 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [4] ----- ironic
2026-04-09 00:43:44.171048 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [4] ----- placement
2026-04-09 00:43:44.171081 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [4] ----- magnum
2026-04-09 00:43:44.173033 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [1] -- openvswitch
2026-04-09 00:43:44.173079 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [2] --- ovn
2026-04-09 00:43:44.173573 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [1] -- memcached
2026-04-09 00:43:44.173707 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [1] -- redis
2026-04-09 00:43:44.173978 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [1] -- rabbitmq-ng
2026-04-09 00:43:44.174465 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [0] - kubernetes
2026-04-09 00:43:44.176954 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [1] -- kubeconfig
2026-04-09 00:43:44.177108 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [1] -- copy-kubeconfig
2026-04-09 00:43:44.177478 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [0] - ceph
2026-04-09 00:43:44.179753 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [1] -- ceph-pools
2026-04-09 00:43:44.179789 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [2] --- copy-ceph-keys
2026-04-09 00:43:44.180020 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [3] ---- cephclient
2026-04-09 00:43:44.180141 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-04-09 00:43:44.180418 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [4] ----- wait-for-keystone
2026-04-09 00:43:44.180717 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [5] ------ kolla-ceph-rgw
2026-04-09 00:43:44.180938 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [5] ------ glance
2026-04-09 00:43:44.181069 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [5] ------ cinder
2026-04-09 00:43:44.181228 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [5] ------ nova
2026-04-09 00:43:44.181694 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [4] ----- prometheus
2026-04-09 00:43:44.181909 | orchestrator | 2026-04-09 00:43:44 | INFO  | A [5] ------ grafana
2026-04-09 00:43:44.394872 | orchestrator | 2026-04-09 00:43:44 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-04-09 00:43:44.394964 | orchestrator | 2026-04-09 00:43:44 | INFO  | Tasks are running in the background
2026-04-09 00:43:46.147567 | orchestrator | 2026-04-09 00:43:46 | INFO  | No task IDs specified, wait for all currently running tasks
2026-04-09 00:43:48.336019 | orchestrator | 2026-04-09 00:43:48 | INFO  | Task c359d931-15d8-44dc-a512-6e4ce786a712 is in state STARTED
2026-04-09 00:43:48.338250 | orchestrator | 2026-04-09 00:43:48 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:43:48.339471 | orchestrator | 2026-04-09 00:43:48 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:43:48.341761 | orchestrator | 2026-04-09 00:43:48 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:43:48.342182 | orchestrator | 2026-04-09 00:43:48 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:43:48.345015 | orchestrator | 2026-04-09 00:43:48 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:43:48.345288 | orchestrator | 2026-04-09 00:43:48 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:43:48.345315 | orchestrator | 2026-04-09 00:43:48 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:43:51.379551 | orchestrator | 2026-04-09 00:43:51 | INFO  | Task c359d931-15d8-44dc-a512-6e4ce786a712 is in state STARTED
2026-04-09 00:43:51.379714 | orchestrator | 2026-04-09 00:43:51 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:43:51.381790 | orchestrator | 2026-04-09 00:43:51 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:43:51.384832 | orchestrator | 2026-04-09 00:43:51 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:43:51.384963 | orchestrator | 2026-04-09 00:43:51 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:43:51.387136 | orchestrator | 2026-04-09 00:43:51 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:43:51.387164 | orchestrator | 2026-04-09 00:43:51 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:43:51.387174 | orchestrator | 2026-04-09 00:43:51 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:43:54.440155 | orchestrator | 2026-04-09 00:43:54 | INFO  | Task c359d931-15d8-44dc-a512-6e4ce786a712 is in state STARTED
2026-04-09 00:43:54.440282 | orchestrator | 2026-04-09 00:43:54 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:43:54.440301 | orchestrator | 2026-04-09 00:43:54 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:43:54.440313 | orchestrator | 2026-04-09 00:43:54 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:43:54.440326 | orchestrator | 2026-04-09 00:43:54 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:43:54.440346 | orchestrator | 2026-04-09 00:43:54 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:43:54.440469 | orchestrator | 2026-04-09 00:43:54 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:43:54.440495 | orchestrator | 2026-04-09 00:43:54 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:43:57.831087 | orchestrator | 2026-04-09 00:43:57 | INFO  | Task c359d931-15d8-44dc-a512-6e4ce786a712 is in state STARTED
2026-04-09 00:43:57.834354 | orchestrator | 2026-04-09 00:43:57 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:43:57.836136 | orchestrator | 2026-04-09 00:43:57 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:43:57.836185 | orchestrator | 2026-04-09 00:43:57 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:43:57.838083 | orchestrator | 2026-04-09 00:43:57 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:43:57.838154 | orchestrator | 2026-04-09 00:43:57 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:43:57.838385 | orchestrator | 2026-04-09 00:43:57 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:43:57.838469 | orchestrator | 2026-04-09 00:43:57 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:44:00.892468 | orchestrator | 2026-04-09 00:44:00 | INFO  | Task c359d931-15d8-44dc-a512-6e4ce786a712 is in state STARTED
2026-04-09 00:44:00.892546 | orchestrator | 2026-04-09 00:44:00 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:44:00.893162 | orchestrator | 2026-04-09 00:44:00 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:44:00.893553 | orchestrator | 2026-04-09 00:44:00 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:44:00.894185 | orchestrator | 2026-04-09 00:44:00 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:44:00.894582 | orchestrator | 2026-04-09 00:44:00 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:44:00.897995 | orchestrator | 2026-04-09 00:44:00 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:44:00.898064 | orchestrator | 2026-04-09 00:44:00 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:44:03.948427 | orchestrator | 2026-04-09 00:44:03 | INFO  | Task c359d931-15d8-44dc-a512-6e4ce786a712 is in state STARTED
2026-04-09 00:44:03.950286 | orchestrator | 2026-04-09 00:44:03 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:44:03.952719 | orchestrator | 2026-04-09 00:44:03 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:44:03.956387 | orchestrator | 2026-04-09 00:44:03 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:44:04.022647 | orchestrator | 2026-04-09 00:44:03 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:44:04.022731 | orchestrator | 2026-04-09 00:44:03 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:44:04.022744 | orchestrator | 2026-04-09 00:44:03 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:44:04.022754 | orchestrator | 2026-04-09
00:44:03 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:44:07.070587 | orchestrator | 2026-04-09 00:44:07 | INFO  | Task c359d931-15d8-44dc-a512-6e4ce786a712 is in state STARTED
2026-04-09 00:44:07.071168 | orchestrator | 2026-04-09 00:44:07 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:44:07.075287 | orchestrator | 2026-04-09 00:44:07 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:44:07.075980 | orchestrator | 2026-04-09 00:44:07 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:44:07.078272 | orchestrator | 2026-04-09 00:44:07 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:44:07.079760 | orchestrator | 2026-04-09 00:44:07 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:44:07.083183 | orchestrator | 2026-04-09 00:44:07 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:44:07.083250 | orchestrator | 2026-04-09 00:44:07 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:44:10.210005 | orchestrator |
2026-04-09 00:44:10.210151 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-04-09 00:44:10.210165 | orchestrator |
2026-04-09 00:44:10.210176 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-04-09 00:44:10.210186 | orchestrator | Thursday 09 April 2026 00:43:52 +0000 (0:00:00.345) 0:00:00.345 ********
2026-04-09 00:44:10.210222 | orchestrator | changed: [testbed-manager]
2026-04-09 00:44:10.210240 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:44:10.210255 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:44:10.210269 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:44:10.210285 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:44:10.210295 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:44:10.210304 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:44:10.210313 | orchestrator |
2026-04-09 00:44:10.210322 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-04-09 00:44:10.210331 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:05.554) 0:00:05.900 ********
2026-04-09 00:44:10.210341 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-09 00:44:10.210360 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-09 00:44:10.210369 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-09 00:44:10.210378 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-09 00:44:10.210392 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-09 00:44:10.210435 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-09 00:44:10.210450 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-09 00:44:10.210465 | orchestrator |
2026-04-09 00:44:10.210480 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-04-09 00:44:10.210495 | orchestrator | Thursday 09 April 2026 00:44:01 +0000 (0:00:03.316) 0:00:09.217 ********
2026-04-09 00:44:10.210517 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:43:59.987861', 'end': '2026-04-09 00:44:00.007541', 'delta': '0:00:00.019680', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:44:10.210543 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:43:59.758274', 'end': '2026-04-09 00:43:59.766819', 'delta': '0:00:00.008545', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:44:10.210556 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:43:59.883240', 'end': '2026-04-09 00:43:59.891716', 'delta': '0:00:00.008476', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:44:10.210610 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:44:00.054922', 'end': '2026-04-09 00:44:00.061783', 'delta': '0:00:00.006861', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:44:10.210623 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:43:59.856701', 'end': '2026-04-09 00:44:00.867483', 'delta': '0:00:01.010782', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:44:10.210634 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:44:01.188775', 'end': '2026-04-09 00:44:01.195493', 'delta': '0:00:00.006718', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:44:10.210644 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-09 00:44:01.515698', 'end': '2026-04-09 00:44:01.527382', 'delta': '0:00:00.011684', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-09 00:44:10.210655 | orchestrator |
2026-04-09 00:44:10.210666 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-04-09 00:44:10.210677 | orchestrator | Thursday 09 April 2026 00:44:03 +0000 (0:00:01.474) 0:00:10.692 ********
2026-04-09 00:44:10.210687 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-09 00:44:10.210698 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-09 00:44:10.210709 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-09 00:44:10.210719 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-09 00:44:10.210729 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-09 00:44:10.210745 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-09 00:44:10.210756 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-09 00:44:10.210766 | orchestrator |
2026-04-09 00:44:10.210776 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-04-09 00:44:10.210786 | orchestrator | Thursday 09 April 2026 00:44:05 +0000 (0:00:02.246) 0:00:12.938 ********
2026-04-09 00:44:10.210797 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-04-09 00:44:10.210807 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-04-09 00:44:10.210818 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-04-09 00:44:10.210829 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-04-09 00:44:10.210839 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-04-09 00:44:10.210849 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-04-09 00:44:10.210859 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-04-09 00:44:10.210869 | orchestrator |
2026-04-09 00:44:10.210878 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:44:10.210893 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:44:10.210908 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:44:10.210918 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:44:10.210927 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:44:10.210936 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:44:10.210945 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:44:10.210954 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:44:10.210963 | orchestrator |
2026-04-09 00:44:10.210972 | orchestrator |
2026-04-09 00:44:10.210981 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:44:10.210990 | orchestrator | Thursday 09 April 2026 00:44:07 +0000 (0:00:02.302) 0:00:15.240 ********
2026-04-09 00:44:10.210999 | orchestrator | ===============================================================================
2026-04-09 00:44:10.211007 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.55s
2026-04-09 00:44:10.211016 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 3.32s
2026-04-09 00:44:10.211025 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.30s
2026-04-09 00:44:10.211034 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.25s
2026-04-09 00:44:10.211043 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.48s
2026-04-09 00:44:10.211052 | orchestrator | 2026-04-09 00:44:10 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED
2026-04-09 00:44:10.211061 | orchestrator | 2026-04-09 00:44:10 | INFO  | Task c359d931-15d8-44dc-a512-6e4ce786a712 is in state SUCCESS
2026-04-09 00:44:10.211070 | orchestrator | 2026-04-09 00:44:10 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:44:10.211079 | orchestrator | 2026-04-09 00:44:10 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:44:10.211088 | orchestrator | 2026-04-09 00:44:10 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:44:10.211102 | orchestrator | 2026-04-09 00:44:10 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:44:10.211111 | orchestrator | 2026-04-09 00:44:10 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:44:10.211120 | orchestrator | 2026-04-09 00:44:10 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:44:10.211129 | orchestrator | 2026-04-09 00:44:10 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:44:13.338470 | orchestrator | 2026-04-09 00:44:13 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED
2026-04-09 00:44:13.338577 | orchestrator | 2026-04-09 00:44:13 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:44:13.338591 | orchestrator | 2026-04-09 00:44:13 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:44:13.338599 | orchestrator | 2026-04-09 00:44:13 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:44:13.338607 | orchestrator | 2026-04-09 00:44:13 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:44:13.338615 | orchestrator | 2026-04-09 00:44:13 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:44:13.338622 | orchestrator | 2026-04-09 00:44:13 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:44:13.338630 | orchestrator | 2026-04-09 00:44:13 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:44:16.318518 | orchestrator | 2026-04-09 00:44:16 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED
2026-04-09 00:44:16.319096 | orchestrator | 2026-04-09 00:44:16 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:44:16.319942 | orchestrator | 2026-04-09 00:44:16 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:44:16.320529 | orchestrator | 2026-04-09 00:44:16 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:44:16.321342 | orchestrator | 2026-04-09 00:44:16 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:44:16.322112 | orchestrator | 2026-04-09 00:44:16 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:44:16.322717 | orchestrator | 2026-04-09 00:44:16 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:44:16.322862 | orchestrator | 2026-04-09 00:44:16 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:44:19.353203 | orchestrator | 2026-04-09 00:44:19 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED
2026-04-09 00:44:19.355147 | orchestrator | 2026-04-09 00:44:19 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:44:19.356618 | orchestrator | 2026-04-09 00:44:19 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:44:19.358400 | orchestrator | 2026-04-09 00:44:19 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:44:19.359361 | orchestrator | 2026-04-09 00:44:19 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:44:19.359389 | orchestrator | 2026-04-09 00:44:19 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:44:19.360365 | orchestrator | 2026-04-09 00:44:19 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:44:19.362302 | orchestrator | 2026-04-09 00:44:19 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:44:22.406795 | orchestrator | 2026-04-09 00:44:22 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED
2026-04-09 00:44:22.407026 | orchestrator | 2026-04-09 00:44:22 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:44:22.409166 | orchestrator | 2026-04-09 00:44:22 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:44:22.411290 | orchestrator | 2026-04-09 00:44:22 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:44:22.411908 | orchestrator | 2026-04-09 00:44:22 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:44:22.413151 | orchestrator | 2026-04-09 00:44:22 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:44:22.415018 | orchestrator | 2026-04-09 00:44:22 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:44:22.415070 | orchestrator | 2026-04-09 00:44:22 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:44:25.787769 | orchestrator | 2026-04-09 00:44:25 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED
2026-04-09 00:44:25.787859 | orchestrator | 2026-04-09 00:44:25 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:44:25.787878 | orchestrator | 2026-04-09 00:44:25 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:44:25.787894 | orchestrator | 2026-04-09 00:44:25 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:44:25.787910 | orchestrator | 2026-04-09 00:44:25 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:44:25.787925 | orchestrator | 2026-04-09 00:44:25 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:44:25.787940 | orchestrator | 2026-04-09 00:44:25 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:44:25.787955 | orchestrator | 2026-04-09 00:44:25 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:44:28.531009 | orchestrator | 2026-04-09 00:44:28 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED
2026-04-09 00:44:28.531405 | orchestrator | 2026-04-09 00:44:28 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:44:28.532686 | orchestrator | 2026-04-09 00:44:28 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:44:28.533585 | orchestrator | 2026-04-09 00:44:28 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:44:28.534331 | orchestrator | 2026-04-09 00:44:28 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:44:28.540299 | orchestrator | 2026-04-09 00:44:28 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:44:28.541377 | orchestrator | 2026-04-09 00:44:28 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state STARTED
2026-04-09 00:44:28.541420 | orchestrator | 2026-04-09 00:44:28 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:44:31.606266 | orchestrator | 2026-04-09 00:44:31 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED
2026-04-09 00:44:31.606343 | orchestrator | 2026-04-09 00:44:31 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:44:31.606355 | orchestrator | 2026-04-09 00:44:31 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED
2026-04-09 00:44:31.606364 | orchestrator | 2026-04-09 00:44:31 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:44:31.606396 | orchestrator | 2026-04-09 00:44:31 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED
2026-04-09 00:44:31.606406 | orchestrator | 2026-04-09 00:44:31 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:44:31.606415 | orchestrator | 2026-04-09 00:44:31 | INFO  | Task 1d729736-582b-4c56-9482-39a44ce6e4ca is in state SUCCESS
2026-04-09 00:44:31.606424 | orchestrator | 2026-04-09 00:44:31 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:44:34.711065 | orchestrator | 2026-04-09 00:44:34 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED
2026-04-09 00:44:34.711174 | orchestrator | 2026-04-09 00:44:34 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED
2026-04-09 00:44:34.711193 | orchestrator | 2026-04-09 00:44:34 | INFO  | Task
732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED 2026-04-09 00:44:34.711209 | orchestrator | 2026-04-09 00:44:34 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:44:34.711224 | orchestrator | 2026-04-09 00:44:34 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:44:34.711238 | orchestrator | 2026-04-09 00:44:34 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:44:34.711253 | orchestrator | 2026-04-09 00:44:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:44:37.810276 | orchestrator | 2026-04-09 00:44:37 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED 2026-04-09 00:44:37.810352 | orchestrator | 2026-04-09 00:44:37 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state STARTED 2026-04-09 00:44:37.810361 | orchestrator | 2026-04-09 00:44:37 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED 2026-04-09 00:44:37.810368 | orchestrator | 2026-04-09 00:44:37 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:44:37.810374 | orchestrator | 2026-04-09 00:44:37 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:44:37.810380 | orchestrator | 2026-04-09 00:44:37 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:44:37.810386 | orchestrator | 2026-04-09 00:44:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:44:40.869711 | orchestrator | 2026-04-09 00:44:40 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED 2026-04-09 00:44:40.869816 | orchestrator | 2026-04-09 00:44:40 | INFO  | Task 767f3fd6-0d7d-4e86-ac18-64c1841fabc2 is in state SUCCESS 2026-04-09 00:44:40.871200 | orchestrator | 2026-04-09 00:44:40 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED 2026-04-09 00:44:40.872399 | orchestrator | 2026-04-09 00:44:40 | INFO  | Task 
5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:44:40.874168 | orchestrator | 2026-04-09 00:44:40 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:44:40.875810 | orchestrator | 2026-04-09 00:44:40 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:44:40.875888 | orchestrator | 2026-04-09 00:44:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:44:43.920678 | orchestrator | 2026-04-09 00:44:43 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED 2026-04-09 00:44:43.922013 | orchestrator | 2026-04-09 00:44:43 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED 2026-04-09 00:44:43.923903 | orchestrator | 2026-04-09 00:44:43 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:44:43.925910 | orchestrator | 2026-04-09 00:44:43 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:44:43.926659 | orchestrator | 2026-04-09 00:44:43 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:44:43.926687 | orchestrator | 2026-04-09 00:44:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:44:46.967496 | orchestrator | 2026-04-09 00:44:46 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED 2026-04-09 00:44:46.969094 | orchestrator | 2026-04-09 00:44:46 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED 2026-04-09 00:44:46.970872 | orchestrator | 2026-04-09 00:44:46 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:44:46.972726 | orchestrator | 2026-04-09 00:44:46 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:44:46.974186 | orchestrator | 2026-04-09 00:44:46 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:44:46.974642 | orchestrator | 2026-04-09 00:44:46 | INFO  | Wait 1 
second(s) until the next check 2026-04-09 00:44:50.023869 | orchestrator | 2026-04-09 00:44:50 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED 2026-04-09 00:44:50.025049 | orchestrator | 2026-04-09 00:44:50 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED 2026-04-09 00:44:50.026200 | orchestrator | 2026-04-09 00:44:50 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:44:50.027857 | orchestrator | 2026-04-09 00:44:50 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:44:50.029598 | orchestrator | 2026-04-09 00:44:50 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:44:50.029641 | orchestrator | 2026-04-09 00:44:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:44:53.070526 | orchestrator | 2026-04-09 00:44:53 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED 2026-04-09 00:44:53.071542 | orchestrator | 2026-04-09 00:44:53 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state STARTED 2026-04-09 00:44:53.073779 | orchestrator | 2026-04-09 00:44:53 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:44:53.074595 | orchestrator | 2026-04-09 00:44:53 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:44:53.077349 | orchestrator | 2026-04-09 00:44:53 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:44:53.077635 | orchestrator | 2026-04-09 00:44:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:44:56.113896 | orchestrator | 2026-04-09 00:44:56 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED 2026-04-09 00:44:56.113997 | orchestrator | 2026-04-09 00:44:56 | INFO  | Task 732196c4-a828-40a5-9dd9-2726ce880890 is in state SUCCESS 2026-04-09 00:44:56.117784 | orchestrator | 2026-04-09 00:44:56.117861 | orchestrator | 2026-04-09 
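The "Task ... is in state STARTED / Wait 1 second(s) until the next check" lines above follow a simple poll-until-terminal-state pattern. A minimal sketch of that loop, assuming a hypothetical `get_state(task_id)` helper that returns the current state string (the real OSISM tooling queries its task backend for this):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll each task until every one reaches a terminal state,
    logging in the same shape as the console output above.
    get_state is a caller-supplied lookup (assumption, not the real API)."""
    pending = list(task_ids)
    while pending:
        still_pending = []
        for task_id in pending:
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state not in ("SUCCESS", "FAILURE"):
                still_pending.append(task_id)
        pending = still_pending
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Note how tasks drop out of the polled set once they report SUCCESS, which matches the log: after 00:44:31 the 1d729736 task no longer appears in the checks.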
00:44:56.118240 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-04-09 00:44:56.118263 | orchestrator | 2026-04-09 00:44:56.118273 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-04-09 00:44:56.118283 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.877) 0:00:00.877 ******** 2026-04-09 00:44:56.118293 | orchestrator | ok: [testbed-manager] => { 2026-04-09 00:44:56.118305 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2026-04-09 00:44:56.118316 | orchestrator | } 2026-04-09 00:44:56.118325 | orchestrator | 2026-04-09 00:44:56.118335 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-04-09 00:44:56.118361 | orchestrator | Thursday 09 April 2026 00:43:54 +0000 (0:00:00.285) 0:00:01.162 ******** 2026-04-09 00:44:56.118371 | orchestrator | ok: [testbed-manager] 2026-04-09 00:44:56.118381 | orchestrator | 2026-04-09 00:44:56.118390 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-04-09 00:44:56.118400 | orchestrator | Thursday 09 April 2026 00:43:56 +0000 (0:00:02.629) 0:00:03.792 ******** 2026-04-09 00:44:56.118409 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-04-09 00:44:56.118420 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-04-09 00:44:56.118429 | orchestrator | 2026-04-09 00:44:56.118481 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-04-09 00:44:56.118513 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:01.977) 0:00:05.769 ******** 2026-04-09 00:44:56.118529 | orchestrator | changed: [testbed-manager] 2026-04-09 00:44:56.118542 | orchestrator | 2026-04-09 00:44:56.118555 | orchestrator | TASK [osism.services.homer 
: Copy docker-compose.yml file] ********************* 2026-04-09 00:44:56.118568 | orchestrator | Thursday 09 April 2026 00:44:00 +0000 (0:00:01.867) 0:00:07.637 ******** 2026-04-09 00:44:56.118580 | orchestrator | changed: [testbed-manager] 2026-04-09 00:44:56.118593 | orchestrator | 2026-04-09 00:44:56.118605 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-04-09 00:44:56.118619 | orchestrator | Thursday 09 April 2026 00:44:02 +0000 (0:00:01.406) 0:00:09.043 ******** 2026-04-09 00:44:56.118632 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-04-09 00:44:56.118646 | orchestrator | ok: [testbed-manager] 2026-04-09 00:44:56.118658 | orchestrator | 2026-04-09 00:44:56.118672 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-04-09 00:44:56.118686 | orchestrator | Thursday 09 April 2026 00:44:28 +0000 (0:00:26.327) 0:00:35.371 ******** 2026-04-09 00:44:56.118699 | orchestrator | changed: [testbed-manager] 2026-04-09 00:44:56.118713 | orchestrator | 2026-04-09 00:44:56.118723 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:44:56.118732 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:44:56.118742 | orchestrator | 2026-04-09 00:44:56.118750 | orchestrator | 2026-04-09 00:44:56.118760 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:44:56.118773 | orchestrator | Thursday 09 April 2026 00:44:31 +0000 (0:00:02.637) 0:00:38.008 ******** 2026-04-09 00:44:56.118785 | orchestrator | =============================================================================== 2026-04-09 00:44:56.118796 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.33s 2026-04-09 00:44:56.118808 | orchestrator | 
osism.services.homer : Restart homer service ---------------------------- 2.64s 2026-04-09 00:44:56.118822 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.63s 2026-04-09 00:44:56.118836 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.98s 2026-04-09 00:44:56.118849 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.87s 2026-04-09 00:44:56.118861 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.41s 2026-04-09 00:44:56.118869 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.29s 2026-04-09 00:44:56.118878 | orchestrator | 2026-04-09 00:44:56.118886 | orchestrator | 2026-04-09 00:44:56.118894 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-09 00:44:56.118902 | orchestrator | 2026-04-09 00:44:56.118910 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-09 00:44:56.118918 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:01.188) 0:00:01.188 ******** 2026-04-09 00:44:56.118927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-09 00:44:56.118946 | orchestrator | 2026-04-09 00:44:56.118954 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-09 00:44:56.118962 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.377) 0:00:01.566 ******** 2026-04-09 00:44:56.118971 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-09 00:44:56.118979 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-09 00:44:56.118987 | orchestrator | ok: [testbed-manager] => 
(item=/opt/openstackclient) 2026-04-09 00:44:56.118995 | orchestrator | 2026-04-09 00:44:56.119003 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-09 00:44:56.119012 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:02.578) 0:00:04.144 ******** 2026-04-09 00:44:56.119020 | orchestrator | changed: [testbed-manager] 2026-04-09 00:44:56.119028 | orchestrator | 2026-04-09 00:44:56.119037 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-09 00:44:56.119050 | orchestrator | Thursday 09 April 2026 00:44:01 +0000 (0:00:03.040) 0:00:07.184 ******** 2026-04-09 00:44:56.119077 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-04-09 00:44:56.119086 | orchestrator | ok: [testbed-manager] 2026-04-09 00:44:56.119094 | orchestrator | 2026-04-09 00:44:56.119102 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-09 00:44:56.119110 | orchestrator | Thursday 09 April 2026 00:44:33 +0000 (0:00:32.456) 0:00:39.640 ******** 2026-04-09 00:44:56.119119 | orchestrator | changed: [testbed-manager] 2026-04-09 00:44:56.119127 | orchestrator | 2026-04-09 00:44:56.119135 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-09 00:44:56.119143 | orchestrator | Thursday 09 April 2026 00:44:34 +0000 (0:00:01.067) 0:00:40.707 ******** 2026-04-09 00:44:56.119151 | orchestrator | ok: [testbed-manager] 2026-04-09 00:44:56.119160 | orchestrator | 2026-04-09 00:44:56.119168 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-09 00:44:56.119176 | orchestrator | Thursday 09 April 2026 00:44:35 +0000 (0:00:00.985) 0:00:41.693 ******** 2026-04-09 00:44:56.119184 | orchestrator | changed: [testbed-manager] 2026-04-09 00:44:56.119193 | orchestrator | 2026-04-09 
00:44:56.119201 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-09 00:44:56.119209 | orchestrator | Thursday 09 April 2026 00:44:37 +0000 (0:00:02.422) 0:00:44.116 ******** 2026-04-09 00:44:56.119217 | orchestrator | changed: [testbed-manager] 2026-04-09 00:44:56.119225 | orchestrator | 2026-04-09 00:44:56.119234 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-09 00:44:56.119242 | orchestrator | Thursday 09 April 2026 00:44:39 +0000 (0:00:01.384) 0:00:45.500 ******** 2026-04-09 00:44:56.119250 | orchestrator | changed: [testbed-manager] 2026-04-09 00:44:56.119258 | orchestrator | 2026-04-09 00:44:56.119266 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-09 00:44:56.119275 | orchestrator | Thursday 09 April 2026 00:44:40 +0000 (0:00:00.801) 0:00:46.302 ******** 2026-04-09 00:44:56.119283 | orchestrator | ok: [testbed-manager] 2026-04-09 00:44:56.119291 | orchestrator | 2026-04-09 00:44:56.119299 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:44:56.119308 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:44:56.119316 | orchestrator | 2026-04-09 00:44:56.119324 | orchestrator | 2026-04-09 00:44:56.119332 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:44:56.119340 | orchestrator | Thursday 09 April 2026 00:44:40 +0000 (0:00:00.316) 0:00:46.618 ******** 2026-04-09 00:44:56.119348 | orchestrator | =============================================================================== 2026-04-09 00:44:56.119356 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.46s 2026-04-09 00:44:56.119370 | orchestrator | osism.services.openstackclient : Copy 
docker-compose.yml file ----------- 3.04s 2026-04-09 00:44:56.119378 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.58s 2026-04-09 00:44:56.119386 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.42s 2026-04-09 00:44:56.119394 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.38s 2026-04-09 00:44:56.119402 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.07s 2026-04-09 00:44:56.119411 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.99s 2026-04-09 00:44:56.119419 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.80s 2026-04-09 00:44:56.119427 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.38s 2026-04-09 00:44:56.119453 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.32s 2026-04-09 00:44:56.119462 | orchestrator | 2026-04-09 00:44:56.119470 | orchestrator | 2026-04-09 00:44:56.119478 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-09 00:44:56.119486 | orchestrator | 2026-04-09 00:44:56.119495 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-09 00:44:56.119503 | orchestrator | Thursday 09 April 2026 00:43:47 +0000 (0:00:00.379) 0:00:00.379 ******** 2026-04-09 00:44:56.119511 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:44:56.119520 | orchestrator | 2026-04-09 00:44:56.119528 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-09 00:44:56.119536 | orchestrator | Thursday 09 April 2026 00:43:49 +0000 
(0:00:01.294) 0:00:01.674 ******** 2026-04-09 00:44:56.119544 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:44:56.119552 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:44:56.119561 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:44:56.119569 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:44:56.119577 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:44:56.119585 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:44:56.119593 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:44:56.119602 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:44:56.119610 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:44:56.119622 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:44:56.119630 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:44:56.119644 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:44:56.119653 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:44:56.119661 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-09 00:44:56.119669 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:44:56.119677 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:44:56.119685 | 
orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:44:56.119694 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-09 00:44:56.119702 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:44:56.119715 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:44:56.119724 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-09 00:44:56.119732 | orchestrator | 2026-04-09 00:44:56.119740 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-09 00:44:56.119748 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:03.796) 0:00:05.470 ******** 2026-04-09 00:44:56.119757 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:44:56.119766 | orchestrator | 2026-04-09 00:44:56.119774 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-09 00:44:56.119782 | orchestrator | Thursday 09 April 2026 00:43:54 +0000 (0:00:01.263) 0:00:06.734 ******** 2026-04-09 00:44:56.119793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 
00:44:56.119805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.119814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.119823 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.119841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.119850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.119864 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.119872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.119881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.119890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.119898 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.119916 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.119929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.119965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.119976 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.119984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.119993 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.120001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.120010 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:44:56.120026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:44:56.120050 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:44:56.120059 | orchestrator |
2026-04-09 00:44:56.120068 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-09 00:44:56.120076 | orchestrator | Thursday 09 April 2026 00:43:59 +0000 (0:00:04.831) 0:00:11.565 ********
2026-04-09 00:44:56.120085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes':
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.120094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.120103 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.120112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120121 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120163 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:44:56.120193 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.120202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.120211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.120220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120250 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:56.120270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-09 00:44:56.120279 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:44:56.120288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120305 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:44:56.120314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120322 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:44:56.120331 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120339 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:44:56.120347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.120356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:44:56.120379 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:56.120387 | orchestrator |
2026-04-09 00:44:56.120395 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-04-09 00:44:56.120407 | orchestrator | Thursday 09 April 2026 00:44:02 +0000 (0:00:03.431) 0:00:14.997 ********
2026-04-09 00:44:56.120430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:44:56.120452 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:44:56.120461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE':
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120479 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120488 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:44:56.120496 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120511 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.120543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120552 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:44:56.120560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.120569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120577 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:44:56.120586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.120595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.120617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-09 00:44:56.120652 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:44:56.120660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120669 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:56.120677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.120686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.120695 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:44:56.120703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:44:56.120718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:44:56.120726 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:56.120735 | orchestrator |
2026-04-09 00:44:56.120743 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-04-09 00:44:56.120751 | orchestrator | Thursday 09 April 2026 00:44:07 +0000 (0:00:04.986) 0:00:19.983 ********
2026-04-09 00:44:56.120760 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:44:56.120768 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:44:56.120776 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:44:56.120785 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:44:56.120793 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:44:56.120801 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:56.120809 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:56.120818 | orchestrator |
2026-04-09 00:44:56.120826 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-09 00:44:56.120834 | orchestrator | Thursday 09 April 2026 00:44:10 +0000 (0:00:02.467) 0:00:22.451 ********
2026-04-09 00:44:56.120842 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:44:56.120851 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:44:56.120864 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:44:56.120882 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:44:56.120897 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:44:56.120910 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:56.120938 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:56.120952 | orchestrator |
2026-04-09 00:44:56.120966 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-09 00:44:56.120978 | orchestrator | Thursday 09 April 2026 00:44:10 +0000 (0:00:00.584) 0:00:23.035 ********
2026-04-09 00:44:56.120992 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:44:56.121005 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:44:56.121018 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:44:56.121031 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:44:56.121044 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:44:56.121058 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:44:56.121071 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:44:56.121084 | orchestrator |
2026-04-09 00:44:56.121100 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-04-09 00:44:56.121114 | orchestrator | Thursday 09 April 2026 00:44:11 +0000 (0:00:01.104) 0:00:24.140 ********
2026-04-09 00:44:56.121129 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:44:56.121142 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:44:56.121155 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:44:56.121170 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:44:56.121188 | orchestrator | changed: [testbed-manager]
2026-04-09 00:44:56.121209 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:44:56.121222 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:44:56.121235 | orchestrator |
2026-04-09 00:44:56.121250 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-04-09 00:44:56.121281 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:02.164) 0:00:26.304 ********
2026-04-09 00:44:56.121298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:44:56.121311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-09 00:44:56.121321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/',
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.121330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.121339 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.121363 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.121373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.121382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121427 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121537 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121570 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 
00:44:56.121601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121666 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.121683 | orchestrator | 2026-04-09 00:44:56.121707 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-09 00:44:56.121723 | orchestrator | Thursday 09 April 2026 00:44:18 +0000 (0:00:04.247) 0:00:30.552 ******** 2026-04-09 00:44:56.121737 | orchestrator | [WARNING]: Skipped 2026-04-09 00:44:56.121753 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-09 00:44:56.121778 | orchestrator | to this access issue: 2026-04-09 00:44:56.121793 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-09 00:44:56.121808 | orchestrator | directory 2026-04-09 00:44:56.121824 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 00:44:56.121838 | orchestrator | 2026-04-09 00:44:56.121854 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-09 00:44:56.121868 | orchestrator | Thursday 09 April 2026 00:44:19 +0000 (0:00:01.167) 0:00:31.719 ******** 2026-04-09 00:44:56.121883 | orchestrator | [WARNING]: Skipped 2026-04-09 00:44:56.121898 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-09 00:44:56.121912 | orchestrator | to this access issue: 2026-04-09 00:44:56.121928 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-09 00:44:56.121943 | orchestrator | directory 2026-04-09 00:44:56.121958 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 00:44:56.121973 | orchestrator | 2026-04-09 00:44:56.121988 | 
orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-09 00:44:56.122003 | orchestrator | Thursday 09 April 2026 00:44:20 +0000 (0:00:01.291) 0:00:33.011 ******** 2026-04-09 00:44:56.122081 | orchestrator | [WARNING]: Skipped 2026-04-09 00:44:56.122103 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-09 00:44:56.122119 | orchestrator | to this access issue: 2026-04-09 00:44:56.122135 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-09 00:44:56.122149 | orchestrator | directory 2026-04-09 00:44:56.122165 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 00:44:56.122181 | orchestrator | 2026-04-09 00:44:56.122197 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-09 00:44:56.122213 | orchestrator | Thursday 09 April 2026 00:44:22 +0000 (0:00:01.439) 0:00:34.450 ******** 2026-04-09 00:44:56.122228 | orchestrator | [WARNING]: Skipped 2026-04-09 00:44:56.122244 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-09 00:44:56.122260 | orchestrator | to this access issue: 2026-04-09 00:44:56.122275 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-09 00:44:56.122289 | orchestrator | directory 2026-04-09 00:44:56.122303 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-09 00:44:56.122318 | orchestrator | 2026-04-09 00:44:56.122333 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-09 00:44:56.122347 | orchestrator | Thursday 09 April 2026 00:44:23 +0000 (0:00:01.202) 0:00:35.652 ******** 2026-04-09 00:44:56.122362 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:44:56.122376 | orchestrator | changed: [testbed-manager] 2026-04-09 00:44:56.122390 | orchestrator | changed: 
[testbed-node-1] 2026-04-09 00:44:56.122405 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:44:56.122419 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:44:56.122433 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:44:56.122474 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:44:56.122487 | orchestrator | 2026-04-09 00:44:56.122502 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-09 00:44:56.122516 | orchestrator | Thursday 09 April 2026 00:44:27 +0000 (0:00:04.548) 0:00:40.200 ******** 2026-04-09 00:44:56.122531 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:44:56.122547 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:44:56.122562 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:44:56.122577 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:44:56.122603 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:44:56.122619 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:44:56.122634 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-09 00:44:56.122648 | orchestrator | 2026-04-09 00:44:56.122664 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-09 00:44:56.122679 | orchestrator | Thursday 09 April 2026 00:44:31 +0000 (0:00:03.922) 0:00:44.123 ******** 2026-04-09 00:44:56.122694 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:44:56.122709 | orchestrator | changed: 
[testbed-node-1] 2026-04-09 00:44:56.122724 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:44:56.122740 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:44:56.122755 | orchestrator | changed: [testbed-manager] 2026-04-09 00:44:56.122771 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:44:56.122786 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:44:56.122801 | orchestrator | 2026-04-09 00:44:56.122817 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-09 00:44:56.122831 | orchestrator | Thursday 09 April 2026 00:44:34 +0000 (0:00:02.714) 0:00:46.837 ******** 2026-04-09 00:44:56.122872 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.122892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.122909 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.122925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.122941 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.122968 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.122985 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123017 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.123034 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.123067 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123083 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123099 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2026-04-09 00:44:56.123157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.123173 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123194 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.123223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.123239 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.123255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.123272 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123298 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123313 | orchestrator | 2026-04-09 00:44:56.123330 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-09 00:44:56.123346 | orchestrator | Thursday 09 April 2026 00:44:37 +0000 (0:00:03.392) 0:00:50.230 ******** 2026-04-09 00:44:56.123362 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:44:56.123378 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:44:56.123395 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:44:56.123411 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:44:56.123426 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:44:56.123468 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:44:56.123484 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-09 00:44:56.123499 | orchestrator | 2026-04-09 00:44:56.123515 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-09 00:44:56.123531 | orchestrator | Thursday 09 April 
2026 00:44:41 +0000 (0:00:03.188) 0:00:53.418 ******** 2026-04-09 00:44:56.123547 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:44:56.123563 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:44:56.123578 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:44:56.123599 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:44:56.123622 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:44:56.123639 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:44:56.123654 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-09 00:44:56.123670 | orchestrator | 2026-04-09 00:44:56.123685 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-09 00:44:56.123698 | orchestrator | Thursday 09 April 2026 00:44:43 +0000 (0:00:02.428) 0:00:55.847 ******** 2026-04-09 00:44:56.123715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.123732 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.123765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.123782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.123798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.123815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.123844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123861 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-09 00:44:56.123878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-09 00:44:56.123919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123981 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.123999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.124015 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.124040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.124055 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.124072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.124088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.124104 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:44:56.124119 | orchestrator | 2026-04-09 00:44:56.124135 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-04-09 00:44:56.124150 | orchestrator | Thursday 09 April 2026 00:44:46 +0000 (0:00:03.244) 0:00:59.092 ******** 2026-04-09 00:44:56.124166 | orchestrator | changed: [testbed-manager] => { 2026-04-09 00:44:56.124182 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:44:56.124197 | orchestrator | } 2026-04-09 00:44:56.124212 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:44:56.124226 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:44:56.124241 | orchestrator | } 2026-04-09 00:44:56.124256 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 
00:44:56.124272 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:44:56.124287 | orchestrator | } 2026-04-09 00:44:56.124311 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:44:56.124326 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:44:56.124339 | orchestrator | } 2026-04-09 00:44:56.124359 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 00:44:56.124374 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:44:56.124389 | orchestrator | } 2026-04-09 00:44:56.124404 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 00:44:56.124472 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:44:56.124493 | orchestrator | } 2026-04-09 00:44:56.124508 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 00:44:56.124523 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:44:56.124538 | orchestrator | } 2026-04-09 00:44:56.124552 | orchestrator | 2026-04-09 00:44:56.124567 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:44:56.124582 | orchestrator | Thursday 09 April 2026 00:44:47 +0000 (0:00:00.698) 0:00:59.791 ******** 2026-04-09 00:44:56.124597 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.124615 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.124630 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.124646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.124662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.124678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.124694 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:44:56.124717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.124754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-09 00:44:56.124771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.124788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.124804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.124820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.124835 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:44:56.124849 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:44:56.124864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.124880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.124920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.124936 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:44:56.124951 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:44:56.124967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.124984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.125000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.125017 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:44:56.125032 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-09 00:44:56.125048 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.125074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:44:56.125090 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:44:56.125106 | orchestrator | 2026-04-09 00:44:56.125122 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-09 00:44:56.125136 | orchestrator | Thursday 09 April 2026 00:44:48 +0000 
(0:00:01.498) 0:01:01.289 ******** 2026-04-09 00:44:56.125152 | orchestrator | changed: [testbed-manager] 2026-04-09 00:44:56.125167 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:44:56.125182 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:44:56.125197 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:44:56.125219 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:44:56.125235 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:44:56.125251 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:44:56.125266 | orchestrator | 2026-04-09 00:44:56.125290 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-09 00:44:56.125306 | orchestrator | Thursday 09 April 2026 00:44:50 +0000 (0:00:01.619) 0:01:02.908 ******** 2026-04-09 00:44:56.125322 | orchestrator | changed: [testbed-manager] 2026-04-09 00:44:56.125338 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:44:56.125354 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:44:56.125369 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:44:56.125384 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:44:56.125399 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:44:56.125415 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:44:56.125430 | orchestrator | 2026-04-09 00:44:56.125512 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:44:56.125528 | orchestrator | Thursday 09 April 2026 00:44:51 +0000 (0:00:01.274) 0:01:04.183 ******** 2026-04-09 00:44:56.125544 | orchestrator | 2026-04-09 00:44:56.125559 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:44:56.125574 | orchestrator | Thursday 09 April 2026 00:44:51 +0000 (0:00:00.063) 0:01:04.246 ******** 2026-04-09 00:44:56.125590 | orchestrator | 2026-04-09 00:44:56.125606 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-04-09 00:44:56.125622 | orchestrator | Thursday 09 April 2026 00:44:51 +0000 (0:00:00.062) 0:01:04.309 ******** 2026-04-09 00:44:56.125637 | orchestrator | 2026-04-09 00:44:56.125652 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:44:56.125668 | orchestrator | Thursday 09 April 2026 00:44:51 +0000 (0:00:00.061) 0:01:04.370 ******** 2026-04-09 00:44:56.125682 | orchestrator | 2026-04-09 00:44:56.125698 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:44:56.125713 | orchestrator | Thursday 09 April 2026 00:44:52 +0000 (0:00:00.062) 0:01:04.432 ******** 2026-04-09 00:44:56.125729 | orchestrator | 2026-04-09 00:44:56.125745 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:44:56.125760 | orchestrator | Thursday 09 April 2026 00:44:52 +0000 (0:00:00.076) 0:01:04.509 ******** 2026-04-09 00:44:56.125776 | orchestrator | 2026-04-09 00:44:56.125791 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-09 00:44:56.125806 | orchestrator | Thursday 09 April 2026 00:44:52 +0000 (0:00:00.067) 0:01:04.577 ******** 2026-04-09 00:44:56.125822 | orchestrator | 2026-04-09 00:44:56.125836 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-09 00:44:56.125850 | orchestrator | Thursday 09 April 2026 00:44:52 +0000 (0:00:00.106) 0:01:04.683 ******** 2026-04-09 00:44:56.125890 | orchestrator | fatal: [testbed-manager]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_g0j5l10w/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_g0j5l10w/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_g0j5l10w/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_g0j5l10w/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:44:56.125920 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_nkclkx43/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_nkclkx43/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_nkclkx43/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_nkclkx43/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:44:56.125966 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_knel8jyu/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_knel8jyu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_knel8jyu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_knel8jyu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:44:56.125983 | orchestrator | fatal: [testbed-node-3]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_1jd05ij4/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_1jd05ij4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_1jd05ij4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_1jd05ij4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:44:56.126070 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_lb2etm8u/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_lb2etm8u/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_lb2etm8u/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_lb2etm8u/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:44:56.126091 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_5tqpvm6u/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_5tqpvm6u/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_5tqpvm6u/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_5tqpvm6u/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:44:56.126132 | orchestrator | fatal: [testbed-node-4]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_l857nndo/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_l857nndo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_l857nndo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_l857nndo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Ffluentd: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:44:56.126158 | orchestrator | 2026-04-09 00:44:56.126172 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:44:56.126189 | orchestrator | testbed-manager : ok=20  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-09 00:44:56.126205 | orchestrator | testbed-node-0 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-09 00:44:56.126219 | orchestrator | testbed-node-1 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-09 00:44:56.126233 | orchestrator | testbed-node-2 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-09 00:44:56.126248 | orchestrator | testbed-node-3 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-09 00:44:56.126262 | orchestrator | testbed-node-4 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-09 00:44:56.126276 | orchestrator | testbed-node-5 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-09 00:44:56.126290 | orchestrator | 2026-04-09 00:44:56.126304 | orchestrator | 2026-04-09 00:44:56.126318 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:44:56.126332 | orchestrator | Thursday 09 April 2026 00:44:54 +0000 (0:00:02.548) 0:01:07.231 ******** 2026-04-09 00:44:56.126346 | orchestrator | =============================================================================== 2026-04-09 00:44:56.126360 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.99s 2026-04-09 00:44:56.126375 | 
orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.83s 2026-04-09 00:44:56.126389 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.55s 2026-04-09 00:44:56.126410 | orchestrator | common : Copying over config.json files for services -------------------- 4.25s 2026-04-09 00:44:56.126424 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.92s 2026-04-09 00:44:56.126457 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.80s 2026-04-09 00:44:56.126472 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.43s 2026-04-09 00:44:56.126486 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.39s 2026-04-09 00:44:56.126500 | orchestrator | service-check-containers : common | Check containers -------------------- 3.24s 2026-04-09 00:44:56.126514 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.19s 2026-04-09 00:44:56.126528 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.71s 2026-04-09 00:44:56.126542 | orchestrator | common : Restart fluentd container -------------------------------------- 2.55s 2026-04-09 00:44:56.126556 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.47s 2026-04-09 00:44:56.126578 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.43s 2026-04-09 00:44:56.126592 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.16s 2026-04-09 00:44:56.126606 | orchestrator | common : Creating log volume -------------------------------------------- 1.62s 2026-04-09 00:44:56.126619 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.50s 2026-04-09 00:44:56.126632 | 
orchestrator | common : Find custom fluentd format config files ------------------------ 1.44s 2026-04-09 00:44:56.126645 | orchestrator | common : include_tasks -------------------------------------------------- 1.29s 2026-04-09 00:44:56.126659 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.29s 2026-04-09 00:44:56.126686 | orchestrator | 2026-04-09 00:44:56 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:44:56.126700 | orchestrator | 2026-04-09 00:44:56 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:44:56.126713 | orchestrator | 2026-04-09 00:44:56 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:44:56.126727 | orchestrator | 2026-04-09 00:44:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:44:59.166468 | orchestrator | 2026-04-09 00:44:59 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED 2026-04-09 00:44:59.166773 | orchestrator | 2026-04-09 00:44:59 | INFO  | Task b0e71e62-3a2f-444d-8bde-cb8a716b17a6 is in state STARTED 2026-04-09 00:44:59.167415 | orchestrator | 2026-04-09 00:44:59 | INFO  | Task 9e6e909e-5744-4b55-af35-e67a3ed81be6 is in state STARTED 2026-04-09 00:44:59.168113 | orchestrator | 2026-04-09 00:44:59 | INFO  | Task 64777a81-d8f9-46b5-a36c-1e3bed0941b1 is in state STARTED 2026-04-09 00:44:59.168830 | orchestrator | 2026-04-09 00:44:59 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:44:59.169598 | orchestrator | 2026-04-09 00:44:59 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:44:59.172042 | orchestrator | 2026-04-09 00:44:59 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:44:59.172812 | orchestrator | 2026-04-09 00:44:59 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:44:59.172841 | orchestrator | 
2026-04-09 00:44:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:02.218386 | orchestrator | 2026-04-09 00:45:02 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED 2026-04-09 00:45:02.221323 | orchestrator | 2026-04-09 00:45:02 | INFO  | Task b0e71e62-3a2f-444d-8bde-cb8a716b17a6 is in state STARTED 2026-04-09 00:45:02.223063 | orchestrator | 2026-04-09 00:45:02 | INFO  | Task 9e6e909e-5744-4b55-af35-e67a3ed81be6 is in state STARTED 2026-04-09 00:45:02.224618 | orchestrator | 2026-04-09 00:45:02 | INFO  | Task 64777a81-d8f9-46b5-a36c-1e3bed0941b1 is in state STARTED 2026-04-09 00:45:02.226249 | orchestrator | 2026-04-09 00:45:02 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:02.227504 | orchestrator | 2026-04-09 00:45:02 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:45:02.227804 | orchestrator | 2026-04-09 00:45:02 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:02.228541 | orchestrator | 2026-04-09 00:45:02 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:02.228569 | orchestrator | 2026-04-09 00:45:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:05.342388 | orchestrator | 2026-04-09 00:45:05 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED 2026-04-09 00:45:05.342550 | orchestrator | 2026-04-09 00:45:05 | INFO  | Task b0e71e62-3a2f-444d-8bde-cb8a716b17a6 is in state STARTED 2026-04-09 00:45:05.342567 | orchestrator | 2026-04-09 00:45:05 | INFO  | Task 9e6e909e-5744-4b55-af35-e67a3ed81be6 is in state STARTED 2026-04-09 00:45:05.342576 | orchestrator | 2026-04-09 00:45:05 | INFO  | Task 64777a81-d8f9-46b5-a36c-1e3bed0941b1 is in state STARTED 2026-04-09 00:45:05.342585 | orchestrator | 2026-04-09 00:45:05 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:05.342594 | orchestrator | 
2026-04-09 00:45:05 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:45:05.342603 | orchestrator | 2026-04-09 00:45:05 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:05.342613 | orchestrator | 2026-04-09 00:45:05 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:05.342622 | orchestrator | 2026-04-09 00:45:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:08.302894 | orchestrator | 2026-04-09 00:45:08 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state STARTED 2026-04-09 00:45:08.312435 | orchestrator | 2026-04-09 00:45:08 | INFO  | Task b0e71e62-3a2f-444d-8bde-cb8a716b17a6 is in state STARTED 2026-04-09 00:45:08.317391 | orchestrator | 2026-04-09 00:45:08 | INFO  | Task 9e6e909e-5744-4b55-af35-e67a3ed81be6 is in state STARTED 2026-04-09 00:45:08.319061 | orchestrator | 2026-04-09 00:45:08 | INFO  | Task 64777a81-d8f9-46b5-a36c-1e3bed0941b1 is in state STARTED 2026-04-09 00:45:08.324436 | orchestrator | 2026-04-09 00:45:08 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:08.329236 | orchestrator | 2026-04-09 00:45:08 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:45:08.332433 | orchestrator | 2026-04-09 00:45:08 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:08.332839 | orchestrator | 2026-04-09 00:45:08 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:08.332991 | orchestrator | 2026-04-09 00:45:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:11.380117 | orchestrator | 2026-04-09 00:45:11 | INFO  | Task dee634d3-8404-4862-9428-f4394b1e96ad is in state SUCCESS 2026-04-09 00:45:11.380651 | orchestrator | 2026-04-09 00:45:11 | INFO  | Task b0e71e62-3a2f-444d-8bde-cb8a716b17a6 is in state STARTED 2026-04-09 00:45:11.381565 | orchestrator | 
2026-04-09 00:45:11 | INFO  | Task 9e6e909e-5744-4b55-af35-e67a3ed81be6 is in state STARTED 2026-04-09 00:45:11.382108 | orchestrator | 2026-04-09 00:45:11 | INFO  | Task 64777a81-d8f9-46b5-a36c-1e3bed0941b1 is in state STARTED 2026-04-09 00:45:11.383637 | orchestrator | 2026-04-09 00:45:11 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:11.384952 | orchestrator | 2026-04-09 00:45:11 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:45:11.387970 | orchestrator | 2026-04-09 00:45:11 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:11.389114 | orchestrator | 2026-04-09 00:45:11 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:11.389187 | orchestrator | 2026-04-09 00:45:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:14.516357 | orchestrator | 2026-04-09 00:45:14 | INFO  | Task b0e71e62-3a2f-444d-8bde-cb8a716b17a6 is in state STARTED 2026-04-09 00:45:14.516446 | orchestrator | 2026-04-09 00:45:14 | INFO  | Task 9e6e909e-5744-4b55-af35-e67a3ed81be6 is in state STARTED 2026-04-09 00:45:14.516557 | orchestrator | 2026-04-09 00:45:14 | INFO  | Task 64777a81-d8f9-46b5-a36c-1e3bed0941b1 is in state STARTED 2026-04-09 00:45:14.516567 | orchestrator | 2026-04-09 00:45:14 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:14.516576 | orchestrator | 2026-04-09 00:45:14 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:45:14.516583 | orchestrator | 2026-04-09 00:45:14 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:14.516591 | orchestrator | 2026-04-09 00:45:14 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:14.516599 | orchestrator | 2026-04-09 00:45:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:17.520942 | orchestrator | 
2026-04-09 00:45:17 | INFO  | Task b0e71e62-3a2f-444d-8bde-cb8a716b17a6 is in state STARTED 2026-04-09 00:45:17.521340 | orchestrator | 2026-04-09 00:45:17 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED 2026-04-09 00:45:17.522575 | orchestrator | 2026-04-09 00:45:17 | INFO  | Task 9e6e909e-5744-4b55-af35-e67a3ed81be6 is in state STARTED 2026-04-09 00:45:17.526246 | orchestrator | 2026-04-09 00:45:17.526317 | orchestrator | 2026-04-09 00:45:17.526334 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-04-09 00:45:17.526347 | orchestrator | 2026-04-09 00:45:17.526359 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-04-09 00:45:17.526371 | orchestrator | Thursday 09 April 2026 00:44:11 +0000 (0:00:00.244) 0:00:00.244 ******** 2026-04-09 00:45:17.526383 | orchestrator | ok: [testbed-manager] 2026-04-09 00:45:17.526395 | orchestrator | 2026-04-09 00:45:17.526407 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-04-09 00:45:17.526418 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:01.674) 0:00:01.918 ******** 2026-04-09 00:45:17.526430 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-04-09 00:45:17.526441 | orchestrator | 2026-04-09 00:45:17.526482 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-04-09 00:45:17.526501 | orchestrator | Thursday 09 April 2026 00:44:14 +0000 (0:00:01.022) 0:00:02.941 ******** 2026-04-09 00:45:17.526518 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:17.526530 | orchestrator | 2026-04-09 00:45:17.526541 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-04-09 00:45:17.526552 | orchestrator | Thursday 09 April 2026 00:44:16 +0000 (0:00:01.710) 0:00:04.652 ******** 2026-04-09 00:45:17.526564 | orchestrator | 
FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-04-09 00:45:17.526575 | orchestrator | ok: [testbed-manager] 2026-04-09 00:45:17.526587 | orchestrator | 2026-04-09 00:45:17.526599 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-04-09 00:45:17.526612 | orchestrator | Thursday 09 April 2026 00:45:03 +0000 (0:00:47.277) 0:00:51.929 ******** 2026-04-09 00:45:17.526630 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:17.526648 | orchestrator | 2026-04-09 00:45:17.526666 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:45:17.526685 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:17.526701 | orchestrator | 2026-04-09 00:45:17.526767 | orchestrator | 2026-04-09 00:45:17.526779 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:45:17.526790 | orchestrator | Thursday 09 April 2026 00:45:09 +0000 (0:00:05.454) 0:00:57.384 ******** 2026-04-09 00:45:17.526802 | orchestrator | =============================================================================== 2026-04-09 00:45:17.526813 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 47.28s 2026-04-09 00:45:17.526856 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 5.45s 2026-04-09 00:45:17.526870 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.71s 2026-04-09 00:45:17.526884 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.67s 2026-04-09 00:45:17.526903 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.02s 2026-04-09 00:45:17.526922 | orchestrator | 2026-04-09 00:45:17.526940 | orchestrator | 2026-04-09 00:45:17.526974 | 
orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:45:17.526994 | orchestrator | 2026-04-09 00:45:17.527014 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:45:17.527034 | orchestrator | Thursday 09 April 2026 00:45:03 +0000 (0:00:00.596) 0:00:00.596 ******** 2026-04-09 00:45:17.527054 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:45:17.527067 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:45:17.527081 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:45:17.527093 | orchestrator | 2026-04-09 00:45:17.527106 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:45:17.527123 | orchestrator | Thursday 09 April 2026 00:45:03 +0000 (0:00:00.456) 0:00:01.052 ******** 2026-04-09 00:45:17.527142 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-09 00:45:17.527160 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-09 00:45:17.527178 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-09 00:45:17.527196 | orchestrator | 2026-04-09 00:45:17.527214 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-09 00:45:17.527233 | orchestrator | 2026-04-09 00:45:17.527250 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-09 00:45:17.527268 | orchestrator | Thursday 09 April 2026 00:45:04 +0000 (0:00:00.807) 0:00:01.860 ******** 2026-04-09 00:45:17.527289 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:45:17.527309 | orchestrator | 2026-04-09 00:45:17.527326 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-09 00:45:17.527337 | orchestrator | Thursday 09 April 2026 
00:45:05 +0000 (0:00:00.690) 0:00:02.550 ******** 2026-04-09 00:45:17.527348 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-09 00:45:17.527360 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-09 00:45:17.527371 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-09 00:45:17.527383 | orchestrator | 2026-04-09 00:45:17.527394 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-09 00:45:17.527405 | orchestrator | Thursday 09 April 2026 00:45:07 +0000 (0:00:01.866) 0:00:04.417 ******** 2026-04-09 00:45:17.527416 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-09 00:45:17.527427 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-09 00:45:17.527485 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-09 00:45:17.527502 | orchestrator | 2026-04-09 00:45:17.527514 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-04-09 00:45:17.527526 | orchestrator | Thursday 09 April 2026 00:45:09 +0000 (0:00:01.933) 0:00:06.351 ******** 2026-04-09 00:45:17.527562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 00:45:17.527594 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 00:45:17.527607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 00:45:17.527618 | orchestrator | 2026-04-09 00:45:17.527630 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-04-09 00:45:17.527641 | orchestrator | Thursday 09 April 2026 00:45:10 +0000 (0:00:01.322) 0:00:07.673 ******** 2026-04-09 00:45:17.527653 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:45:17.527664 | orchestrator |  "msg": "Notifying 
handlers" 2026-04-09 00:45:17.527676 | orchestrator | } 2026-04-09 00:45:17.527687 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:45:17.527699 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:17.527710 | orchestrator | } 2026-04-09 00:45:17.527721 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:45:17.527733 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:17.527744 | orchestrator | } 2026-04-09 00:45:17.527755 | orchestrator | 2026-04-09 00:45:17.527766 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:45:17.527778 | orchestrator | Thursday 09 April 2026 00:45:11 +0000 (0:00:00.994) 0:00:08.667 ******** 2026-04-09 00:45:17.527790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 00:45:17.527802 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:45:17.527830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 00:45:17.527849 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:45:17.527861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 00:45:17.527873 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:45:17.527885 | orchestrator | 2026-04-09 00:45:17.527905 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-09 00:45:17.527924 | orchestrator | Thursday 09 April 2026 00:45:13 +0000 (0:00:02.288) 0:00:10.956 ******** 2026-04-09 00:45:17.527944 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_pwpek762/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_pwpek762/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_pwpek762/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_pwpek762/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:17.528001 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_0am4wzl1/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_0am4wzl1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_0am4wzl1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_0am4wzl1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:17.528054 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_rq37ttai/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_rq37ttai/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_rq37ttai/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_rq37ttai/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmemcached: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:17.528079 | orchestrator | 2026-04-09 00:45:17.528091 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:45:17.528104 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-09 00:45:17.528117 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-09 00:45:17.528129 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-09 00:45:17.528140 | orchestrator | 2026-04-09 00:45:17.528151 | orchestrator | 2026-04-09 00:45:17.528163 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:45:17.528174 | orchestrator | Thursday 09 April 2026 00:45:15 +0000 (0:00:02.272) 0:00:13.228 ******** 2026-04-09 00:45:17.528186 | orchestrator | 
=============================================================================== 2026-04-09 00:45:17.528197 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.29s 2026-04-09 00:45:17.528208 | orchestrator | memcached : Restart memcached container --------------------------------- 2.27s 2026-04-09 00:45:17.528219 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.93s 2026-04-09 00:45:17.528231 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.87s 2026-04-09 00:45:17.528242 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.32s 2026-04-09 00:45:17.528253 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.99s 2026-04-09 00:45:17.528264 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s 2026-04-09 00:45:17.528275 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.69s 2026-04-09 00:45:17.528287 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s 2026-04-09 00:45:17.528298 | orchestrator | 2026-04-09 00:45:17 | INFO  | Task 64777a81-d8f9-46b5-a36c-1e3bed0941b1 is in state SUCCESS 2026-04-09 00:45:17.528310 | orchestrator | 2026-04-09 00:45:17 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:17.528329 | orchestrator | 2026-04-09 00:45:17 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:45:17.528691 | orchestrator | 2026-04-09 00:45:17 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:17.528731 | orchestrator | 2026-04-09 00:45:17 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:17.528751 | orchestrator | 2026-04-09 00:45:17 | INFO  | Wait 1 second(s) until the next check 
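[editor's note] The fatal `docker.errors.APIError: ... Bad Request ("invalid reference format")` above comes from the double slash in the image name `registry.osism.tech/kolla/release//memcached` — most likely an empty path segment left behind by an unset namespace variable when the image reference was templated. Docker rejects the name because the repository grammar (per the OCI distribution spec) requires every slash-separated path component to be non-empty. A minimal sketch of that check, with an approximate component regex (the real grammar also covers tags, digests, and port numbers):

```python
import re

# Approximate OCI/Docker repository path-component grammar:
# lowercase alphanumerics, optionally separated by '.', '_', '__', or '-' runs.
COMPONENT = re.compile(r"^[a-z0-9]+((\.|_{1,2}|-+)[a-z0-9]+)*$")

def is_valid_repository(name: str) -> bool:
    """Return True if every slash-separated component of the repository
    name is non-empty and matches the component grammar. An empty string
    between two slashes fails the match, which is exactly what happens
    with 'release//memcached'."""
    return all(COMPONENT.match(part) for part in name.split("/"))
```

With a single slash the name validates; the double slash produces an empty component and fails, matching the 400 the Docker API returned here. The fix is in the image-name templating (e.g. the Kolla namespace setting), not in the memcached role itself.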
2026-04-09 00:45:20.558073 | orchestrator | 2026-04-09 00:45:20 | INFO  | Task b0e71e62-3a2f-444d-8bde-cb8a716b17a6 is in state STARTED 2026-04-09 00:45:20.558774 | orchestrator | 2026-04-09 00:45:20 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED 2026-04-09 00:45:20.560330 | orchestrator | 2026-04-09 00:45:20 | INFO  | Task 9e6e909e-5744-4b55-af35-e67a3ed81be6 is in state STARTED 2026-04-09 00:45:20.562218 | orchestrator | 2026-04-09 00:45:20 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:20.564246 | orchestrator | 2026-04-09 00:45:20 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 2026-04-09 00:45:20.565218 | orchestrator | 2026-04-09 00:45:20 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:20.570172 | orchestrator | 2026-04-09 00:45:20 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:20.570237 | orchestrator | 2026-04-09 00:45:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:23.601441 | orchestrator | 2026-04-09 00:45:23 | INFO  | Task b0e71e62-3a2f-444d-8bde-cb8a716b17a6 is in state SUCCESS 2026-04-09 00:45:23.602448 | orchestrator | 2026-04-09 00:45:23.602523 | orchestrator | 2026-04-09 00:45:23.602549 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:45:23.602557 | orchestrator | 2026-04-09 00:45:23.602564 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:45:23.602572 | orchestrator | Thursday 09 April 2026 00:45:02 +0000 (0:00:00.837) 0:00:00.837 ******** 2026-04-09 00:45:23.602578 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:45:23.602586 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:45:23.602593 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:45:23.602600 | orchestrator | 2026-04-09 00:45:23.602606 | orchestrator | TASK [Group hosts 
based on enabled services] *********************************** 2026-04-09 00:45:23.602613 | orchestrator | Thursday 09 April 2026 00:45:03 +0000 (0:00:00.619) 0:00:01.464 ******** 2026-04-09 00:45:23.602620 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-09 00:45:23.602627 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-09 00:45:23.602634 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-09 00:45:23.602641 | orchestrator | 2026-04-09 00:45:23.602647 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-09 00:45:23.602654 | orchestrator | 2026-04-09 00:45:23.602661 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-09 00:45:23.602668 | orchestrator | Thursday 09 April 2026 00:45:03 +0000 (0:00:00.528) 0:00:01.993 ******** 2026-04-09 00:45:23.602675 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:45:23.602683 | orchestrator | 2026-04-09 00:45:23.602689 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-09 00:45:23.602695 | orchestrator | Thursday 09 April 2026 00:45:05 +0000 (0:00:01.175) 0:00:03.168 ******** 2026-04-09 00:45:23.602704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602715 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 
'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602794 | orchestrator | 2026-04-09 00:45:23.602800 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-09 00:45:23.602807 | orchestrator | Thursday 09 April 2026 00:45:07 +0000 (0:00:02.381) 0:00:05.550 ******** 2026-04-09 00:45:23.602814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602871 | orchestrator | 2026-04-09 00:45:23.602878 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-09 00:45:23.602885 | orchestrator | Thursday 09 April 2026 
00:45:10 +0000 (0:00:03.136) 0:00:08.686 ******** 2026-04-09 00:45:23.602892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602944 | orchestrator | 2026-04-09 00:45:23.602950 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-04-09 00:45:23.602957 | orchestrator | Thursday 09 April 2026 00:45:14 +0000 (0:00:04.268) 0:00:12.955 ******** 2026-04-09 00:45:23.602965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.602994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.603002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.603019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-09 00:45:23.603027 | orchestrator | 2026-04-09 00:45:23.603034 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-04-09 00:45:23.603042 | orchestrator | Thursday 09 April 2026 00:45:17 +0000 (0:00:02.191) 0:00:15.146 ******** 2026-04-09 00:45:23.603049 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:45:23.603057 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:23.603064 | orchestrator | } 2026-04-09 00:45:23.603072 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:45:23.603079 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:23.603086 | orchestrator | } 2026-04-09 00:45:23.603093 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:45:23.603101 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:23.603108 | orchestrator | } 2026-04-09 00:45:23.603115 | orchestrator | 2026-04-09 00:45:23.603122 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:45:23.603134 | orchestrator | Thursday 09 April 2026 00:45:18 +0000 (0:00:00.941) 0:00:16.087 ******** 2026-04-09 00:45:23.603142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-09 00:45:23.603150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-09 00:45:23.603159 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:45:23.603166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-09 00:45:23.603175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-09 00:45:23.603182 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:45:23.603199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-09 00:45:23.603207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-09 00:45:23.603219 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:45:23.603227 | orchestrator | 2026-04-09 00:45:23.603233 | orchestrator | 
TASK [redis : Flush handlers] ************************************************** 2026-04-09 00:45:23.603240 | orchestrator | Thursday 09 April 2026 00:45:18 +0000 (0:00:00.606) 0:00:16.694 ******** 2026-04-09 00:45:23.603247 | orchestrator | 2026-04-09 00:45:23.603255 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-09 00:45:23.603262 | orchestrator | Thursday 09 April 2026 00:45:18 +0000 (0:00:00.062) 0:00:16.756 ******** 2026-04-09 00:45:23.603269 | orchestrator | 2026-04-09 00:45:23.603276 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-09 00:45:23.603283 | orchestrator | Thursday 09 April 2026 00:45:18 +0000 (0:00:00.057) 0:00:16.814 ******** 2026-04-09 00:45:23.603290 | orchestrator | 2026-04-09 00:45:23.603298 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-09 00:45:23.603315 | orchestrator | Thursday 09 April 2026 00:45:18 +0000 (0:00:00.057) 0:00:16.871 ******** 2026-04-09 00:45:23.603333 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_3n_gck7e/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_3n_gck7e/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_3n_gck7e/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_3n_gck7e/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:23.603343 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ekfe9i3y/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ekfe9i3y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ekfe9i3y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ekfe9i3y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:23.603368 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_3ea1ot3y/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_3ea1ot3y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_3ea1ot3y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_3ea1ot3y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fredis: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:23.603381 | orchestrator | 2026-04-09 00:45:23.603388 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:45:23.603395 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-09 00:45:23.603403 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-09 00:45:23.603410 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-09 00:45:23.603416 | orchestrator | 2026-04-09 00:45:23.603423 | orchestrator | 2026-04-09 00:45:23.603430 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:45:23.603436 | orchestrator | Thursday 09 April 2026 00:45:20 +0000 (0:00:01.807) 0:00:18.679 ******** 2026-04-09 00:45:23.603443 | orchestrator | 
=============================================================================== 2026-04-09 00:45:23.603450 | orchestrator | redis : Copying over redis config files --------------------------------- 4.27s 2026-04-09 00:45:23.603477 | orchestrator | redis : Copying over default config.json files -------------------------- 3.14s 2026-04-09 00:45:23.603484 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.38s 2026-04-09 00:45:23.603491 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.19s 2026-04-09 00:45:23.603497 | orchestrator | redis : Restart redis container ----------------------------------------- 1.81s 2026-04-09 00:45:23.603504 | orchestrator | redis : include_tasks --------------------------------------------------- 1.18s 2026-04-09 00:45:23.603510 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.94s 2026-04-09 00:45:23.603517 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.63s 2026-04-09 00:45:23.603524 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.61s 2026-04-09 00:45:23.603530 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2026-04-09 00:45:23.603537 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.18s 2026-04-09 00:45:23.603611 | orchestrator | 2026-04-09 00:45:23 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED 2026-04-09 00:45:23.604611 | orchestrator | 2026-04-09 00:45:23 | INFO  | Task 9e6e909e-5744-4b55-af35-e67a3ed81be6 is in state STARTED 2026-04-09 00:45:23.609823 | orchestrator | 2026-04-09 00:45:23 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:23.609899 | orchestrator | 2026-04-09 00:45:23 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state STARTED 
2026-04-09 00:45:23.612112 | orchestrator | 2026-04-09 00:45:23 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:23.613335 | orchestrator | 2026-04-09 00:45:23 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:23.614285 | orchestrator | 2026-04-09 00:45:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:26.659195 | orchestrator | 2026-04-09 00:45:26 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED 2026-04-09 00:45:26.662538 | orchestrator | 2026-04-09 00:45:26 | INFO  | Task 9e6e909e-5744-4b55-af35-e67a3ed81be6 is in state STARTED 2026-04-09 00:45:26.664784 | orchestrator | 2026-04-09 00:45:26 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:26.666324 | orchestrator | 2026-04-09 00:45:26 | INFO  | Task 5193da0f-48b1-4125-8cdc-d1cc4bc1371d is in state SUCCESS 2026-04-09 00:45:26.666689 | orchestrator | 2026-04-09 00:45:26.666718 | orchestrator | 2026-04-09 00:45:26.666730 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:45:26.666743 | orchestrator | 2026-04-09 00:45:26.666754 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:45:26.666766 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.955) 0:00:00.955 ******** 2026-04-09 00:45:26.666796 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-09 00:45:26.666809 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-09 00:45:26.666821 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-09 00:45:26.666832 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-09 00:45:26.666843 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-09 00:45:26.666854 | orchestrator | changed: [testbed-node-3] 
=> (item=enable_netdata_True) 2026-04-09 00:45:26.666866 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-04-09 00:45:26.666877 | orchestrator | 2026-04-09 00:45:26.666889 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-04-09 00:45:26.666901 | orchestrator | 2026-04-09 00:45:26.666922 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-04-09 00:45:26.666933 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:01.949) 0:00:02.904 ******** 2026-04-09 00:45:26.666948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:45:26.666974 | orchestrator | 2026-04-09 00:45:26.666985 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-09 00:45:26.666998 | orchestrator | Thursday 09 April 2026 00:43:57 +0000 (0:00:01.506) 0:00:04.410 ******** 2026-04-09 00:45:26.667009 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:45:26.667022 | orchestrator | ok: [testbed-manager] 2026-04-09 00:45:26.667034 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:45:26.667045 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:45:26.667057 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:45:26.667068 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:45:26.667079 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:45:26.667091 | orchestrator | 2026-04-09 00:45:26.667102 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-09 00:45:26.667113 | orchestrator | Thursday 09 April 2026 00:44:00 +0000 (0:00:02.874) 0:00:07.285 ******** 2026-04-09 00:45:26.667125 | orchestrator | ok: [testbed-manager] 2026-04-09 00:45:26.667137 | 
orchestrator | ok: [testbed-node-3] 2026-04-09 00:45:26.667148 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:45:26.667159 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:45:26.667170 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:45:26.667181 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:45:26.667193 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:45:26.667204 | orchestrator | 2026-04-09 00:45:26.667215 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-04-09 00:45:26.667227 | orchestrator | Thursday 09 April 2026 00:44:03 +0000 (0:00:03.011) 0:00:10.297 ******** 2026-04-09 00:45:26.667239 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:26.667250 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:45:26.667261 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:45:26.667273 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:45:26.667284 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:45:26.667296 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:45:26.667308 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:45:26.667337 | orchestrator | 2026-04-09 00:45:26.667349 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-09 00:45:26.667360 | orchestrator | Thursday 09 April 2026 00:44:05 +0000 (0:00:02.380) 0:00:12.678 ******** 2026-04-09 00:45:26.667372 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:45:26.667383 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:45:26.667394 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:45:26.667406 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:45:26.667417 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:45:26.667429 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:45:26.667440 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:26.667452 | orchestrator | 2026-04-09 00:45:26.667485 | 
orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-04-09 00:45:26.667496 | orchestrator | Thursday 09 April 2026 00:44:15 +0000 (0:00:10.062) 0:00:22.740 ******** 2026-04-09 00:45:26.667508 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:45:26.667520 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:45:26.667531 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:45:26.667542 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:45:26.667553 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:45:26.667564 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:45:26.667574 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:26.667584 | orchestrator | 2026-04-09 00:45:26.667595 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-09 00:45:26.667607 | orchestrator | Thursday 09 April 2026 00:44:58 +0000 (0:00:43.166) 0:01:05.907 ******** 2026-04-09 00:45:26.667619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:45:26.667632 | orchestrator | 2026-04-09 00:45:26.667644 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-09 00:45:26.667655 | orchestrator | Thursday 09 April 2026 00:44:59 +0000 (0:00:01.088) 0:01:06.995 ******** 2026-04-09 00:45:26.667664 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-09 00:45:26.667676 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-09 00:45:26.667686 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-04-09 00:45:26.667698 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-04-09 00:45:26.667721 | orchestrator | changed: [testbed-node-0] => 
(item=netdata.conf) 2026-04-09 00:45:26.667732 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-09 00:45:26.667744 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-09 00:45:26.667755 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-04-09 00:45:26.667766 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-09 00:45:26.667784 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-04-09 00:45:26.667794 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-09 00:45:26.667805 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-04-09 00:45:26.667816 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-09 00:45:26.667827 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-04-09 00:45:26.667838 | orchestrator | 2026-04-09 00:45:26.667849 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-09 00:45:26.667861 | orchestrator | Thursday 09 April 2026 00:45:03 +0000 (0:00:03.757) 0:01:10.753 ******** 2026-04-09 00:45:26.667873 | orchestrator | ok: [testbed-manager] 2026-04-09 00:45:26.667884 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:45:26.667895 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:45:26.667906 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:45:26.667917 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:45:26.667928 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:45:26.667939 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:45:26.667962 | orchestrator | 2026-04-09 00:45:26.667973 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-09 00:45:26.667984 | orchestrator | Thursday 09 April 2026 00:45:04 +0000 (0:00:01.190) 0:01:11.943 ******** 2026-04-09 00:45:26.667995 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:45:26.668007 | 
orchestrator | changed: [testbed-node-0] 2026-04-09 00:45:26.668018 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:45:26.668029 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:45:26.668040 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:26.668051 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:45:26.668063 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:45:26.668074 | orchestrator | 2026-04-09 00:45:26.668085 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-04-09 00:45:26.668097 | orchestrator | Thursday 09 April 2026 00:45:06 +0000 (0:00:01.362) 0:01:13.306 ******** 2026-04-09 00:45:26.668108 | orchestrator | ok: [testbed-manager] 2026-04-09 00:45:26.668118 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:45:26.668130 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:45:26.668141 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:45:26.668152 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:45:26.668163 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:45:26.668174 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:45:26.668185 | orchestrator | 2026-04-09 00:45:26.668196 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-09 00:45:26.668208 | orchestrator | Thursday 09 April 2026 00:45:07 +0000 (0:00:01.684) 0:01:14.991 ******** 2026-04-09 00:45:26.668219 | orchestrator | ok: [testbed-manager] 2026-04-09 00:45:26.668230 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:45:26.668241 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:45:26.668252 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:45:26.668263 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:45:26.668274 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:45:26.668285 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:45:26.668296 | orchestrator | 2026-04-09 00:45:26.668307 | orchestrator | TASK [osism.services.netdata : 
Include host type specific tasks] *************** 2026-04-09 00:45:26.668319 | orchestrator | Thursday 09 April 2026 00:45:10 +0000 (0:00:02.262) 0:01:17.253 ******** 2026-04-09 00:45:26.668331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-09 00:45:26.668346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:45:26.668358 | orchestrator | 2026-04-09 00:45:26.668369 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-09 00:45:26.668381 | orchestrator | Thursday 09 April 2026 00:45:11 +0000 (0:00:01.751) 0:01:19.005 ******** 2026-04-09 00:45:26.668392 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:26.668403 | orchestrator | 2026-04-09 00:45:26.668412 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-09 00:45:26.668423 | orchestrator | Thursday 09 April 2026 00:45:14 +0000 (0:00:02.097) 0:01:21.102 ******** 2026-04-09 00:45:26.668433 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:45:26.668443 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:45:26.668482 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:45:26.668495 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:45:26.668505 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:45:26.668512 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:45:26.668519 | orchestrator | changed: [testbed-manager] 2026-04-09 00:45:26.668526 | orchestrator | 2026-04-09 00:45:26.668533 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:45:26.668540 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2026-04-09 00:45:26.668557 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:26.668564 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:26.668571 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:26.668586 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:26.668594 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:26.668607 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:45:26.668614 | orchestrator | 2026-04-09 00:45:26.668621 | orchestrator | 2026-04-09 00:45:26.668628 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:45:26.668635 | orchestrator | Thursday 09 April 2026 00:45:25 +0000 (0:00:11.251) 0:01:32.354 ******** 2026-04-09 00:45:26.668642 | orchestrator | =============================================================================== 2026-04-09 00:45:26.668650 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 43.17s 2026-04-09 00:45:26.668657 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.25s 2026-04-09 00:45:26.668663 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.06s 2026-04-09 00:45:26.668670 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.76s 2026-04-09 00:45:26.668677 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.01s 2026-04-09 00:45:26.668683 | orchestrator | osism.services.netdata : Remove old architecture-dependent 
repository --- 2.87s 2026-04-09 00:45:26.668690 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.38s 2026-04-09 00:45:26.668696 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.26s 2026-04-09 00:45:26.668702 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.10s 2026-04-09 00:45:26.668709 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.95s 2026-04-09 00:45:26.668715 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.75s 2026-04-09 00:45:26.668722 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.68s 2026-04-09 00:45:26.668728 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.51s 2026-04-09 00:45:26.668735 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.36s 2026-04-09 00:45:26.668741 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.19s 2026-04-09 00:45:26.668748 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.09s 2026-04-09 00:45:26.669090 | orchestrator | 2026-04-09 00:45:26 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:26.672271 | orchestrator | 2026-04-09 00:45:26 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:26.672308 | orchestrator | 2026-04-09 00:45:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:29.745766 | orchestrator | 2026-04-09 00:45:29 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED 2026-04-09 00:45:29.745846 | orchestrator | 2026-04-09 00:45:29 | INFO  | Task 9e6e909e-5744-4b55-af35-e67a3ed81be6 is in state STARTED 2026-04-09 00:45:29.746237 | orchestrator | 2026-04-09 00:45:29 
| INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:29.747255 | orchestrator | 2026-04-09 00:45:29 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:29.748888 | orchestrator | 2026-04-09 00:45:29 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:29.748990 | orchestrator | 2026-04-09 00:45:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:32.790950 | orchestrator | 2026-04-09 00:45:32 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED 2026-04-09 00:45:32.791832 | orchestrator | 2026-04-09 00:45:32 | INFO  | Task 9e6e909e-5744-4b55-af35-e67a3ed81be6 is in state SUCCESS 2026-04-09 00:45:32.792013 | orchestrator | 2026-04-09 00:45:32.793410 | orchestrator | 2026-04-09 00:45:32.793521 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:45:32.793547 | orchestrator | 2026-04-09 00:45:32.793560 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:45:32.793578 | orchestrator | Thursday 09 April 2026 00:45:03 +0000 (0:00:00.836) 0:00:00.836 ******** 2026-04-09 00:45:32.793591 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:45:32.793602 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:45:32.793616 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:45:32.793630 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:45:32.793643 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:45:32.793661 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:45:32.793674 | orchestrator | 2026-04-09 00:45:32.793688 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:45:32.793703 | orchestrator | Thursday 09 April 2026 00:45:04 +0000 (0:00:01.229) 0:00:02.066 ******** 2026-04-09 00:45:32.793720 | orchestrator | ok: [testbed-node-0] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-09 00:45:32.793736 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-09 00:45:32.793751 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-09 00:45:32.793765 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-09 00:45:32.793983 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-09 00:45:32.793999 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-09 00:45:32.794006 | orchestrator | 2026-04-09 00:45:32.794012 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-09 00:45:32.794065 | orchestrator | 2026-04-09 00:45:32.794082 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-09 00:45:32.794087 | orchestrator | Thursday 09 April 2026 00:45:06 +0000 (0:00:01.844) 0:00:03.911 ******** 2026-04-09 00:45:32.794092 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:45:32.794098 | orchestrator | 2026-04-09 00:45:32.794102 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-09 00:45:32.794106 | orchestrator | Thursday 09 April 2026 00:45:07 +0000 (0:00:01.097) 0:00:05.009 ******** 2026-04-09 00:45:32.794111 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-09 00:45:32.794115 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-09 00:45:32.794119 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-09 00:45:32.794123 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-09 00:45:32.794144 | 
orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-09 00:45:32.794148 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-09 00:45:32.794163 | orchestrator | 2026-04-09 00:45:32.794168 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-09 00:45:32.794172 | orchestrator | Thursday 09 April 2026 00:45:09 +0000 (0:00:01.826) 0:00:06.835 ******** 2026-04-09 00:45:32.794191 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-09 00:45:32.794195 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-09 00:45:32.794199 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-09 00:45:32.794203 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-09 00:45:32.794207 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-09 00:45:32.794211 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-09 00:45:32.794215 | orchestrator | 2026-04-09 00:45:32.794221 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-09 00:45:32.794228 | orchestrator | Thursday 09 April 2026 00:45:11 +0000 (0:00:02.259) 0:00:09.095 ******** 2026-04-09 00:45:32.794234 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-09 00:45:32.794240 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:45:32.794247 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-09 00:45:32.794253 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-09 00:45:32.794259 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:45:32.794265 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-09 00:45:32.794271 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:45:32.794277 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-09 00:45:32.794284 | 
orchestrator | skipping: [testbed-node-3] 2026-04-09 00:45:32.794291 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:45:32.794296 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-09 00:45:32.794302 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:45:32.794308 | orchestrator | 2026-04-09 00:45:32.794314 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-09 00:45:32.794320 | orchestrator | Thursday 09 April 2026 00:45:13 +0000 (0:00:01.435) 0:00:10.530 ******** 2026-04-09 00:45:32.794326 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:45:32.794332 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:45:32.794337 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:45:32.794351 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:45:32.794365 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:45:32.794380 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:45:32.794394 | orchestrator | 2026-04-09 00:45:32.794406 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-09 00:45:32.794420 | orchestrator | Thursday 09 April 2026 00:45:13 +0000 (0:00:00.691) 0:00:11.221 ******** 2026-04-09 00:45:32.794478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794580 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794825 | orchestrator | 2026-04-09 00:45:32.794840 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-09 00:45:32.794854 | orchestrator | Thursday 09 April 2026 00:45:15 +0000 (0:00:02.088) 0:00:13.310 ******** 2026-04-09 00:45:32.794870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794951 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.794973 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2026-04-09 00:45:32.794991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795025 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795052 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795064 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795076 | orchestrator | 2026-04-09 00:45:32.795082 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-09 00:45:32.795089 | orchestrator | Thursday 09 April 2026 00:45:19 +0000 
(0:00:03.600) 0:00:16.911 ******** 2026-04-09 00:45:32.795095 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:45:32.795110 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:45:32.795122 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:45:32.795129 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:45:32.795135 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:45:32.795142 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:45:32.795149 | orchestrator | 2026-04-09 00:45:32.795170 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-04-09 00:45:32.795182 | orchestrator | Thursday 09 April 2026 00:45:20 +0000 (0:00:01.250) 0:00:18.161 ******** 2026-04-09 00:45:32.795200 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795302 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795358 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-09 00:45:32.795447 | orchestrator | 2026-04-09 00:45:32.795476 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 
2026-04-09 00:45:32.795483 | orchestrator | Thursday 09 April 2026 00:45:23 +0000 (0:00:02.704) 0:00:20.866 ******** 2026-04-09 00:45:32.795490 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:45:32.795498 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:32.795504 | orchestrator | } 2026-04-09 00:45:32.795511 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:45:32.795517 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:32.795527 | orchestrator | } 2026-04-09 00:45:32.795544 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:45:32.795557 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:32.795570 | orchestrator | } 2026-04-09 00:45:32.795578 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 00:45:32.795585 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:32.795591 | orchestrator | } 2026-04-09 00:45:32.795598 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 00:45:32.795604 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:32.795610 | orchestrator | } 2026-04-09 00:45:32.795617 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 00:45:32.795624 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:32.795631 | orchestrator | } 2026-04-09 00:45:32.795642 | orchestrator | 2026-04-09 00:45:32.795653 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:45:32.795668 | orchestrator | Thursday 09 April 2026 00:45:24 +0000 (0:00:00.807) 0:00:21.673 ******** 2026-04-09 00:45:32.795684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-09 00:45:32.795754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-09 00:45:32.795762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-09 00:45:32.795776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-09 00:45:32.795787 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:45:32.795806 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:45:32.795822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-09 00:45:32.795838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-09 00:45:32.795887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-09 00:45:32.795915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-09 00:45:32.795934 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:45:32.795949 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:45:32.795956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-09 00:45:32.795968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-09 00:45:32.795977 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:45:32.795991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release//openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-09 00:45:32.796005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release//openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-09 00:45:32.796031 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:45:32.796045 | orchestrator | 2026-04-09 00:45:32.796064 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-09 00:45:32.796081 | orchestrator | Thursday 09 April 2026 00:45:27 +0000 (0:00:03.431) 0:00:25.105 ******** 2026-04-09 00:45:32.796093 | orchestrator | 2026-04-09 00:45:32.796109 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-09 00:45:32.796128 | orchestrator | Thursday 09 April 2026 00:45:28 +0000 (0:00:00.560) 0:00:25.665 ******** 2026-04-09 00:45:32.796134 | orchestrator | 2026-04-09 00:45:32.796139 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-09 00:45:32.796145 | orchestrator | Thursday 09 April 2026 00:45:28 +0000 (0:00:00.375) 0:00:26.041 ******** 2026-04-09 00:45:32.796151 | orchestrator | 
2026-04-09 00:45:32.796157 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-09 00:45:32.796163 | orchestrator | Thursday 09 April 2026 00:45:28 +0000 (0:00:00.289) 0:00:26.330 ******** 2026-04-09 00:45:32.796169 | orchestrator | 2026-04-09 00:45:32.796184 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-09 00:45:32.796190 | orchestrator | Thursday 09 April 2026 00:45:29 +0000 (0:00:00.174) 0:00:26.505 ******** 2026-04-09 00:45:32.796197 | orchestrator | 2026-04-09 00:45:32.796203 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-09 00:45:32.796209 | orchestrator | Thursday 09 April 2026 00:45:29 +0000 (0:00:00.142) 0:00:26.648 ******** 2026-04-09 00:45:32.796215 | orchestrator | 2026-04-09 00:45:32.796221 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-09 00:45:32.796234 | orchestrator | Thursday 09 April 2026 00:45:29 +0000 (0:00:00.159) 0:00:26.808 ******** 2026-04-09 00:45:32.796297 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_h3rezn55/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_h3rezn55/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_h3rezn55/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_h3rezn55/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:32.796361 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_lj3s25t8/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_lj3s25t8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_lj3s25t8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_lj3s25t8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:32.796381 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_y05y4o7l/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_y05y4o7l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_y05y4o7l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_y05y4o7l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:32.796425 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_g2qj3q0m/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_g2qj3q0m/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_g2qj3q0m/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_g2qj3q0m/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:32.796543 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload__154v9ji/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload__154v9ji/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload__154v9ji/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload__154v9ji/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:32.796563 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_1uovw651/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_1uovw651/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_1uovw651/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_1uovw651/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopenvswitch-db-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:32.796586 | orchestrator | 2026-04-09 00:45:32.796595 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:45:32.796613 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-09 00:45:32.796631 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-09 00:45:32.796645 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-09 00:45:32.796658 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-09 00:45:32.796673 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 
2026-04-09 00:45:32.796689 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-09 00:45:32.796703 | orchestrator | 2026-04-09 00:45:32.796719 | orchestrator | 2026-04-09 00:45:32.796749 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:45:32.796768 | orchestrator | Thursday 09 April 2026 00:45:32 +0000 (0:00:02.996) 0:00:29.805 ******** 2026-04-09 00:45:32.796782 | orchestrator | =============================================================================== 2026-04-09 00:45:32.796798 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.60s 2026-04-09 00:45:32.796806 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.43s 2026-04-09 00:45:32.796812 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 3.00s 2026-04-09 00:45:32.796818 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.70s 2026-04-09 00:45:32.796826 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.26s 2026-04-09 00:45:32.796855 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.09s 2026-04-09 00:45:32.796862 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.84s 2026-04-09 00:45:32.796869 | orchestrator | module-load : Load modules ---------------------------------------------- 1.83s 2026-04-09 00:45:32.796876 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.70s 2026-04-09 00:45:32.796883 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.44s 2026-04-09 00:45:32.796889 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.25s 2026-04-09 00:45:32.796896 | orchestrator | 
Group hosts based on Kolla action --------------------------------------- 1.23s 2026-04-09 00:45:32.796923 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.10s 2026-04-09 00:45:32.796930 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.81s 2026-04-09 00:45:32.796937 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.69s 2026-04-09 00:45:32.796947 | orchestrator | 2026-04-09 00:45:32 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:32.796963 | orchestrator | 2026-04-09 00:45:32 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:32.796974 | orchestrator | 2026-04-09 00:45:32 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:32.796989 | orchestrator | 2026-04-09 00:45:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:35.835203 | orchestrator | 2026-04-09 00:45:35 | INFO  | Task d42468ed-0830-4408-99ec-0173c2499f28 is in state STARTED 2026-04-09 00:45:35.838978 | orchestrator | 2026-04-09 00:45:35 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED 2026-04-09 00:45:35.839640 | orchestrator | 2026-04-09 00:45:35 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:35.841732 | orchestrator | 2026-04-09 00:45:35 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:35.843038 | orchestrator | 2026-04-09 00:45:35 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:35.844618 | orchestrator | 2026-04-09 00:45:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:38.874351 | orchestrator | 2026-04-09 00:45:38 | INFO  | Task d42468ed-0830-4408-99ec-0173c2499f28 is in state STARTED 2026-04-09 00:45:38.874715 | orchestrator | 2026-04-09 00:45:38 | INFO  | Task 
a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED 2026-04-09 00:45:38.875349 | orchestrator | 2026-04-09 00:45:38 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:38.876247 | orchestrator | 2026-04-09 00:45:38 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:38.876914 | orchestrator | 2026-04-09 00:45:38 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:38.876998 | orchestrator | 2026-04-09 00:45:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:41.905755 | orchestrator | 2026-04-09 00:45:41 | INFO  | Task d42468ed-0830-4408-99ec-0173c2499f28 is in state STARTED 2026-04-09 00:45:41.906160 | orchestrator | 2026-04-09 00:45:41 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED 2026-04-09 00:45:41.907001 | orchestrator | 2026-04-09 00:45:41 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:41.908172 | orchestrator | 2026-04-09 00:45:41 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:41.908535 | orchestrator | 2026-04-09 00:45:41 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:41.908658 | orchestrator | 2026-04-09 00:45:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:44.935374 | orchestrator | 2026-04-09 00:45:44 | INFO  | Task d42468ed-0830-4408-99ec-0173c2499f28 is in state STARTED 2026-04-09 00:45:44.935576 | orchestrator | 2026-04-09 00:45:44 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED 2026-04-09 00:45:44.935603 | orchestrator | 2026-04-09 00:45:44 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:44.935613 | orchestrator | 2026-04-09 00:45:44 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:44.937028 | orchestrator | 2026-04-09 00:45:44 | INFO  | Task 
00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:44.937091 | orchestrator | 2026-04-09 00:45:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:47.962098 | orchestrator | 2026-04-09 00:45:47 | INFO  | Task d42468ed-0830-4408-99ec-0173c2499f28 is in state STARTED 2026-04-09 00:45:47.963055 | orchestrator | 2026-04-09 00:45:47 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED 2026-04-09 00:45:47.963831 | orchestrator | 2026-04-09 00:45:47 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:47.964681 | orchestrator | 2026-04-09 00:45:47 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:47.965282 | orchestrator | 2026-04-09 00:45:47 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:47.965328 | orchestrator | 2026-04-09 00:45:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:45:50.999725 | orchestrator | 2026-04-09 00:45:50.999819 | orchestrator | 2026-04-09 00:45:50 | INFO  | Task d42468ed-0830-4408-99ec-0173c2499f28 is in state SUCCESS 2026-04-09 00:45:51.000574 | orchestrator | 2026-04-09 00:45:51.000626 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:45:51.000643 | orchestrator | 2026-04-09 00:45:51.000658 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:45:51.000674 | orchestrator | Thursday 09 April 2026 00:45:36 +0000 (0:00:00.377) 0:00:00.377 ******** 2026-04-09 00:45:51.000688 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:45:51.000705 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:45:51.000720 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:45:51.000735 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:45:51.000761 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:45:51.000771 | orchestrator | ok: [testbed-node-5] 2026-04-09 
00:45:51.000780 | orchestrator | 2026-04-09 00:45:51.000789 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:45:51.000798 | orchestrator | Thursday 09 April 2026 00:45:37 +0000 (0:00:00.568) 0:00:00.946 ******** 2026-04-09 00:45:51.000808 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-09 00:45:51.000818 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-09 00:45:51.000828 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-09 00:45:51.000837 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-09 00:45:51.000846 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-09 00:45:51.000855 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-09 00:45:51.000864 | orchestrator | 2026-04-09 00:45:51.000873 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-09 00:45:51.000882 | orchestrator | 2026-04-09 00:45:51.000892 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-09 00:45:51.000901 | orchestrator | Thursday 09 April 2026 00:45:38 +0000 (0:00:01.230) 0:00:02.177 ******** 2026-04-09 00:45:51.000911 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:45:51.000922 | orchestrator | 2026-04-09 00:45:51.000932 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-09 00:45:51.000941 | orchestrator | Thursday 09 April 2026 00:45:40 +0000 (0:00:01.320) 0:00:03.497 ******** 2026-04-09 00:45:51.000953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.000991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.001002 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.001011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.001021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.001047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.001056 | orchestrator | 2026-04-09 00:45:51.001110 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-09 00:45:51.001124 | orchestrator | Thursday 09 April 2026 00:45:41 +0000 (0:00:01.481) 0:00:04.979 ******** 2026-04-09 00:45:51.002101 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002175 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002226 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002259 | orchestrator | 2026-04-09 00:45:51.002275 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] 
************* 2026-04-09 00:45:51.002290 | orchestrator | Thursday 09 April 2026 00:45:43 +0000 (0:00:01.564) 0:00:06.544 ******** 2026-04-09 00:45:51.002306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002438 | orchestrator | 2026-04-09 00:45:51.002453 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-09 00:45:51.002556 | orchestrator | Thursday 09 April 2026 00:45:44 +0000 (0:00:01.093) 0:00:07.638 ******** 2026-04-09 00:45:51.002577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002624 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002639 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002682 | orchestrator | 2026-04-09 00:45:51.002697 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-04-09 00:45:51.002712 | orchestrator | Thursday 09 April 2026 00:45:45 +0000 (0:00:01.771) 0:00:09.409 ******** 2026-04-09 00:45:51.002734 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002796 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002812 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002828 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:45:51.002843 | orchestrator | 2026-04-09 00:45:51.002858 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-04-09 00:45:51.002873 | orchestrator | Thursday 09 April 2026 00:45:47 +0000 (0:00:01.535) 0:00:10.944 ******** 2026-04-09 00:45:51.002888 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:45:51.002905 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:51.002920 | orchestrator | } 2026-04-09 00:45:51.002936 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:45:51.002951 | orchestrator |  "msg": "Notifying 
handlers" 2026-04-09 00:45:51.002966 | orchestrator | } 2026-04-09 00:45:51.002981 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:45:51.002996 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:51.003011 | orchestrator | } 2026-04-09 00:45:51.003026 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 00:45:51.003041 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:51.003055 | orchestrator | } 2026-04-09 00:45:51.003070 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 00:45:51.003086 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:51.003100 | orchestrator | } 2026-04-09 00:45:51.003115 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 00:45:51.003130 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:45:51.003146 | orchestrator | } 2026-04-09 00:45:51.003161 | orchestrator | 2026-04-09 00:45:51.003177 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:45:51.003199 | orchestrator | Thursday 09 April 2026 00:45:48 +0000 (0:00:00.613) 0:00:11.557 ******** 2026-04-09 00:45:51.003223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:45:51.003239 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:45:51.003261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:45:51.003276 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:45:51.003292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:45:51.003307 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:45:51.003321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:45:51.003338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:45:51.003353 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:45:51.003371 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:45:51.003387 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:45:51.003441 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:45:51.003458 | orchestrator | 2026-04-09 00:45:51.003524 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-09 00:45:51.003541 | orchestrator | Thursday 09 April 2026 00:45:49 +0000 (0:00:01.205) 0:00:12.763 ******** 2026-04-09 00:45:51.003557 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-09 00:45:51.003574 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-09 00:45:51.003589 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-09 00:45:51.003605 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-09 00:45:51.003631 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-09 00:45:51.003647 | orchestrator | fatal: [testbed-node-5]: FAILED! 
=> {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-09 00:45:51.003664 | orchestrator |
2026-04-09 00:45:51.003681 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:45:51.003708 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-09 00:45:51.003726 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-09 00:45:51.003742 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-09 00:45:51.003758 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-09 00:45:51.003781 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-09 00:45:51.003798 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-09 00:45:51.003813 | orchestrator |
2026-04-09 00:45:51.003827 | orchestrator |
2026-04-09 00:45:51.003842 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:45:51.003856 | orchestrator | Thursday 09 April 2026 00:45:50 +0000 (0:00:01.059) 0:00:13.822 ********
2026-04-09 00:45:51.003871 | orchestrator | ===============================================================================
2026-04-09 00:45:51.003885 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.77s
2026-04-09 00:45:51.003900 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.56s
2026-04-09 00:45:51.003914 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 1.54s
2026-04-09 00:45:51.003928 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.48s
2026-04-09 00:45:51.003944 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.32s
2026-04-09 00:45:51.003959 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.23s
2026-04-09 00:45:51.003975 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.21s
2026-04-09 00:45:51.003990 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.09s
2026-04-09 00:45:51.004005 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 1.06s
2026-04-09 00:45:51.004020 | orchestrator | service-check-containers : ovn_controller | Notify handlers to restart containers --- 0.61s
2026-04-09 00:45:51.004036 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.57s
2026-04-09 00:45:51.004051 | orchestrator | 2026-04-09 00:45:51 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED
2026-04-09 00:45:51.004067 | orchestrator | 2026-04-09 00:45:51 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:45:51.004277 | orchestrator | 2026-04-09 00:45:51 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:45:51.004296 | orchestrator | 2026-04-09 00:45:51 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED
2026-04-09 00:45:51.004306 | orchestrator | 2026-04-09 00:45:51 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:45:54.030250 | orchestrator | 2026-04-09 00:45:54 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state STARTED
2026-04-09 00:45:54.031377 | orchestrator | 2026-04-09 00:45:54 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:45:54.032414 | orchestrator | 2026-04-09 00:45:54 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:45:54.033622 | orchestrator | 2026-04-09 00:45:54 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED
2026-04-09 00:45:54.034115 | orchestrator | 2026-04-09 00:45:54 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:45:57.073303 | orchestrator |
2026-04-09 00:45:57.073382 | orchestrator |
2026-04-09 00:45:57.073569 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-04-09 00:45:57.073582 | orchestrator |
2026-04-09 00:45:57.073589 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-04-09 00:45:57.073597 | orchestrator | Thursday 09 April 2026 00:45:21 +0000 (0:00:00.268) 0:00:00.268 ********
2026-04-09 00:45:57.073603 | orchestrator | ok: [localhost] => {
2026-04-09 00:45:57.073611 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-04-09 00:45:57.073618 | orchestrator | }
2026-04-09 00:45:57.073625 | orchestrator |
2026-04-09 00:45:57.073631 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-04-09 00:45:57.073638 | orchestrator | Thursday 09 April 2026 00:45:21 +0000 (0:00:00.107) 0:00:00.375 ********
2026-04-09 00:45:57.073645 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-04-09 00:45:57.073653 | orchestrator | ...ignoring
2026-04-09 00:45:57.073659 | orchestrator |
2026-04-09 00:45:57.073665 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-04-09 00:45:57.073672 | orchestrator | Thursday 09 April 2026 00:45:24 +0000 (0:00:03.519) 0:00:03.895 ********
2026-04-09 00:45:57.073678 | orchestrator | skipping: [localhost]
2026-04-09 00:45:57.073684 | orchestrator |
2026-04-09 00:45:57.073690 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-04-09 00:45:57.073697 | orchestrator | Thursday 09 April 2026 00:45:24 +0000 (0:00:00.059) 0:00:03.954 ********
2026-04-09 00:45:57.073703 | orchestrator | ok: [localhost]
2026-04-09 00:45:57.073709 | orchestrator |
2026-04-09 00:45:57.073716 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:45:57.073722 | orchestrator |
2026-04-09 00:45:57.073728 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:45:57.073734 | orchestrator | Thursday 09 April 2026 00:45:25 +0000 (0:00:00.262) 0:00:04.217 ********
2026-04-09 00:45:57.073740 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:45:57.073747 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:45:57.073753 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:45:57.073759 | orchestrator |
2026-04-09 00:45:57.073778 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:45:57.073785 | orchestrator | Thursday 09 April 2026 00:45:25 +0000 (0:00:00.790) 0:00:05.008 ********
2026-04-09 00:45:57.073791 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-04-09 00:45:57.073797 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-04-09 00:45:57.073803 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-04-09 00:45:57.073810 | orchestrator |
2026-04-09 00:45:57.073816 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-04-09 00:45:57.073822 | orchestrator |
2026-04-09 00:45:57.073828 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-09 00:45:57.073834 | orchestrator | Thursday 09 April 2026 00:45:26 +0000 (0:00:00.905) 0:00:05.913 ********
2026-04-09 00:45:57.073841 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:45:57.073866 | orchestrator |
2026-04-09 00:45:57.073872 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-09 00:45:57.073878 | orchestrator | Thursday 09 April 2026 00:45:28 +0000 (0:00:01.524) 0:00:07.438 ********
2026-04-09 00:45:57.073885 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:45:57.073891 | orchestrator |
2026-04-09 00:45:57.073897 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-04-09 00:45:57.073903 | orchestrator | Thursday 09 April 2026 00:45:30 +0000 (0:00:01.963) 0:00:09.401 ********
2026-04-09 00:45:57.073909 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:45:57.073916 | orchestrator |
2026-04-09 00:45:57.073923 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-04-09 00:45:57.073929 | orchestrator | Thursday 09 April 2026 00:45:30 +0000 (0:00:00.591) 0:00:09.993 ********
2026-04-09 00:45:57.073935 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:45:57.073941 | orchestrator |
2026-04-09 00:45:57.073947 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-04-09 00:45:57.073954 | orchestrator | Thursday 09 April 2026 00:45:31 +0000 (0:00:00.275) 0:00:10.268 ********
2026-04-09 00:45:57.073960 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:45:57.073966 | orchestrator |
2026-04-09 00:45:57.073972 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-04-09 00:45:57.073978 | orchestrator | Thursday 09 April 2026 00:45:31 +0000 (0:00:00.538) 0:00:10.807 ********
2026-04-09 00:45:57.073984 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:45:57.073990 | orchestrator |
2026-04-09 00:45:57.073997 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-09 00:45:57.074003 | orchestrator | Thursday 09 April 2026 00:45:32 +0000 (0:00:00.398) 0:00:11.206 ********
2026-04-09 00:45:57.074009 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:45:57.074058 | orchestrator |
2026-04-09 00:45:57.074065 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-09 00:45:57.074071 | orchestrator | Thursday 09 April 2026 00:45:32 +0000 (0:00:00.759) 0:00:11.965 ********
2026-04-09 00:45:57.074077 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:45:57.074084 | orchestrator |
2026-04-09 00:45:57.074090 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-04-09 00:45:57.074096 | orchestrator | Thursday 09 April 2026 00:45:33 +0000 (0:00:00.780) 0:00:12.746 ********
2026-04-09 00:45:57.074102 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:45:57.074108 | orchestrator |
2026-04-09 00:45:57.074114 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-04-09 00:45:57.074121 | orchestrator | Thursday 09 April 2026 00:45:34 +0000 (0:00:00.644) 0:00:13.391 ********
2026-04-09 00:45:57.074127 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:45:57.074133 | orchestrator |
2026-04-09 00:45:57.074151 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-04-09 00:45:57.074157 | orchestrator | Thursday 09 April 2026 00:45:34 +0000 (0:00:00.330) 0:00:13.721 ********
2026-04-09 00:45:57.074167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074220 | orchestrator |
2026-04-09 00:45:57.074234 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-04-09 00:45:57.074243 | orchestrator | Thursday 09 April 2026 00:45:35 +0000 (0:00:01.269) 0:00:14.991 ********
2026-04-09 00:45:57.074263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074305 | orchestrator |
2026-04-09 00:45:57.074314 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-04-09 00:45:57.074324 | orchestrator | Thursday 09 April 2026 00:45:37 +0000 (0:00:01.592) 0:00:16.584 ********
2026-04-09 00:45:57.074333 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-09 00:45:57.074344 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-09 00:45:57.074354 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-09 00:45:57.074364 | orchestrator |
2026-04-09 00:45:57.074373 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-04-09 00:45:57.074380 | orchestrator | Thursday 09 April 2026 00:45:39 +0000 (0:00:01.788) 0:00:18.372 ********
2026-04-09 00:45:57.074386 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-09 00:45:57.074392 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-09 00:45:57.074398 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-09 00:45:57.074404 | orchestrator |
2026-04-09 00:45:57.074411 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-04-09 00:45:57.074417 | orchestrator | Thursday 09 April 2026 00:45:41 +0000 (0:00:02.285) 0:00:20.658 ********
2026-04-09 00:45:57.074423 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-09 00:45:57.074429 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-09 00:45:57.074435 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-09 00:45:57.074442 | orchestrator |
2026-04-09 00:45:57.074452 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-04-09 00:45:57.074459 | orchestrator | Thursday 09 April 2026 00:45:42 +0000 (0:00:01.368) 0:00:22.027 ********
2026-04-09 00:45:57.074465 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-09 00:45:57.074493 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-09 00:45:57.074501 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-09 00:45:57.074507 | orchestrator |
2026-04-09 00:45:57.074513 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-04-09 00:45:57.074519 | orchestrator | Thursday 09 April 2026 00:45:44 +0000 (0:00:01.445) 0:00:23.473 ********
2026-04-09 00:45:57.074526 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-09 00:45:57.074532 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-09 00:45:57.074538 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-09 00:45:57.074544 | orchestrator |
2026-04-09 00:45:57.074550 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-04-09 00:45:57.074556 | orchestrator | Thursday 09 April 2026 00:45:45 +0000 (0:00:01.530) 0:00:25.003 ********
2026-04-09 00:45:57.074563 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-09 00:45:57.074569 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-09 00:45:57.074575 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-09 00:45:57.074581 | orchestrator |
2026-04-09 00:45:57.074587 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-09 00:45:57.074593 | orchestrator | Thursday 09 April 2026 00:45:47 +0000 (0:00:01.472) 0:00:26.475 ********
2026-04-09 00:45:57.074603 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:45:57.074609 | orchestrator |
2026-04-09 00:45:57.074616 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] *******
2026-04-09 00:45:57.074622 | orchestrator | Thursday 09 April 2026 00:45:47 +0000 (0:00:00.606) 0:00:27.082 ********
2026-04-09 00:45:57.074629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074662 | orchestrator |
2026-04-09 00:45:57.074668 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] ***
2026-04-09 00:45:57.074674 | orchestrator | Thursday 09 April 2026 00:45:49 +0000 (0:00:01.142) 0:00:28.225 ********
2026-04-09 00:45:57.074685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074692 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:45:57.074698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074705 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:45:57.074717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074728 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:45:57.074735 | orchestrator |
2026-04-09 00:45:57.074741 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] ****
2026-04-09 00:45:57.074747 | orchestrator | Thursday 09 April 2026 00:45:49 +0000 (0:00:00.364) 0:00:28.589 ********
2026-04-09 00:45:57.074754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074771 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:45:57.074786 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:45:57.074793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074812 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:45:57.074818 | orchestrator |
2026-04-09 00:45:57.074824 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ******************
2026-04-09 00:45:57.074831 | orchestrator | Thursday 09 April 2026 00:45:50 +0000 (0:00:00.691) 0:00:29.281 ********
2026-04-09 00:45:57.074842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:45:57.074876 | orchestrator |
2026-04-09 00:45:57.074882 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] ***
2026-04-09 00:45:57.074888 | orchestrator | Thursday 09 April 2026 00:45:51 +0000 (0:00:01.143) 0:00:30.425 ********
2026-04-09 00:45:57.074895 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 00:45:57.074901 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:45:57.074907 | orchestrator | }
2026-04-09 00:45:57.074914 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 00:45:57.074920 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:45:57.074927 | orchestrator | }
2026-04-09 00:45:57.074933 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 00:45:57.074939 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:45:57.074945 | orchestrator | }
2026-04-09 00:45:57.074951 | orchestrator |
2026-04-09 00:45:57.074958 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 00:45:57.074964 | orchestrator | Thursday 09 April 2026 00:45:51 +0000 (0:00:00.273) 0:00:30.698 ********
2026-04-09 00:45:57.074975 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:45:57.074986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:45:57.074993 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:45:57.074999 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:45:57.075005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:45:57.075016 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:45:57.075022 | orchestrator | 2026-04-09 00:45:57.075029 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-09 00:45:57.075035 | orchestrator | Thursday 09 April 2026 00:45:52 +0000 (0:00:00.623) 0:00:31.322 ******** 2026-04-09 00:45:57.075041 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:45:57.075047 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:45:57.075054 | orchestrator | changed: [testbed-node-2] 
2026-04-09 00:45:57.075060 | orchestrator | 2026-04-09 00:45:57.075066 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-09 00:45:57.075073 | orchestrator | Thursday 09 April 2026 00:45:53 +0000 (0:00:00.853) 0:00:32.175 ******** 2026-04-09 00:45:57.075084 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_4le49s_p/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_4le49s_p/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_4le49s_p/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:57.075095 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_wt97ocds/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_wt97ocds/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_wt97ocds/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File 
\"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:57.075165 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_lnv7eiw5/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_lnv7eiw5/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_lnv7eiw5/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n 
^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Frabbitmq: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:45:57.075181 | orchestrator | 2026-04-09 00:45:57.075192 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:45:57.075198 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-09 00:45:57.075205 | orchestrator | testbed-node-0 : ok=19  changed=12  unreachable=0 failed=1  skipped=9  rescued=0 ignored=0 2026-04-09 00:45:57.075212 | orchestrator | testbed-node-1 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0 2026-04-09 00:45:57.075218 | orchestrator | testbed-node-2 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0 2026-04-09 00:45:57.075224 | orchestrator | 2026-04-09 00:45:57.075230 | orchestrator | 2026-04-09 00:45:57.075236 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:45:57.075243 | orchestrator | Thursday 09 April 2026 00:45:54 +0000 (0:00:01.058) 0:00:33.233 ******** 2026-04-09 00:45:57.075249 | orchestrator | =============================================================================== 2026-04-09 00:45:57.075255 | orchestrator | Check RabbitMQ service -------------------------------------------------- 
3.52s 2026-04-09 00:45:57.075261 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.29s 2026-04-09 00:45:57.075267 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.96s 2026-04-09 00:45:57.075273 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.79s 2026-04-09 00:45:57.075280 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.59s 2026-04-09 00:45:57.075286 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.53s 2026-04-09 00:45:57.075292 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.52s 2026-04-09 00:45:57.075298 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.47s 2026-04-09 00:45:57.075304 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.45s 2026-04-09 00:45:57.075310 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.37s 2026-04-09 00:45:57.075317 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.27s 2026-04-09 00:45:57.075325 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.14s 2026-04-09 00:45:57.075335 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.14s 2026-04-09 00:45:57.075349 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 1.06s 2026-04-09 00:45:57.075361 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.91s 2026-04-09 00:45:57.075378 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.85s 2026-04-09 00:45:57.075387 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.79s 
2026-04-09 00:45:57.075398 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.78s 2026-04-09 00:45:57.075407 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.76s 2026-04-09 00:45:57.075416 | orchestrator | service-cert-copy : rabbitmq | Copying over backend internal TLS key ---- 0.69s 2026-04-09 00:45:57.075426 | orchestrator | 2026-04-09 00:45:57 | INFO  | Task a693353d-4e6f-4847-b0f1-fdb0765aa009 is in state SUCCESS 2026-04-09 00:45:57.075435 | orchestrator | 2026-04-09 00:45:57 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:45:57.075446 | orchestrator | 2026-04-09 00:45:57 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:45:57.075456 | orchestrator | 2026-04-09 00:45:57 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:45:57.075466 | orchestrator | 2026-04-09 00:45:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:00.105822 | orchestrator | 2026-04-09 00:46:00 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:00.107618 | orchestrator | 2026-04-09 00:46:00 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:00.108947 | orchestrator | 2026-04-09 00:46:00 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:00.109001 | orchestrator | 2026-04-09 00:46:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:03.158832 | orchestrator | 2026-04-09 00:46:03 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:03.159274 | orchestrator | 2026-04-09 00:46:03 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:03.160945 | orchestrator | 2026-04-09 00:46:03 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:03.160960 | orchestrator | 
2026-04-09 00:46:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:06.201002 | orchestrator | 2026-04-09 00:46:06 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:06.201862 | orchestrator | 2026-04-09 00:46:06 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:06.203689 | orchestrator | 2026-04-09 00:46:06 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:06.203839 | orchestrator | 2026-04-09 00:46:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:09.243130 | orchestrator | 2026-04-09 00:46:09 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:09.244759 | orchestrator | 2026-04-09 00:46:09 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:09.245968 | orchestrator | 2026-04-09 00:46:09 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:09.246297 | orchestrator | 2026-04-09 00:46:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:12.281196 | orchestrator | 2026-04-09 00:46:12 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:12.281276 | orchestrator | 2026-04-09 00:46:12 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:12.282125 | orchestrator | 2026-04-09 00:46:12 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:12.282181 | orchestrator | 2026-04-09 00:46:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:15.312610 | orchestrator | 2026-04-09 00:46:15 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:15.313409 | orchestrator | 2026-04-09 00:46:15 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:15.315238 | orchestrator | 2026-04-09 00:46:15 | INFO  | Task 
00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:15.315296 | orchestrator | 2026-04-09 00:46:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:18.352142 | orchestrator | 2026-04-09 00:46:18 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:18.353795 | orchestrator | 2026-04-09 00:46:18 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:18.354965 | orchestrator | 2026-04-09 00:46:18 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:18.354994 | orchestrator | 2026-04-09 00:46:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:21.391218 | orchestrator | 2026-04-09 00:46:21 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:21.391910 | orchestrator | 2026-04-09 00:46:21 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:21.391962 | orchestrator | 2026-04-09 00:46:21 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:21.391971 | orchestrator | 2026-04-09 00:46:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:24.424641 | orchestrator | 2026-04-09 00:46:24 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:24.424722 | orchestrator | 2026-04-09 00:46:24 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:24.425106 | orchestrator | 2026-04-09 00:46:24 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:24.425134 | orchestrator | 2026-04-09 00:46:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:27.463995 | orchestrator | 2026-04-09 00:46:27 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:27.464926 | orchestrator | 2026-04-09 00:46:27 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state 
STARTED 2026-04-09 00:46:27.465695 | orchestrator | 2026-04-09 00:46:27 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:27.465811 | orchestrator | 2026-04-09 00:46:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:30.501308 | orchestrator | 2026-04-09 00:46:30 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:30.501895 | orchestrator | 2026-04-09 00:46:30 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:30.506742 | orchestrator | 2026-04-09 00:46:30 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:30.508288 | orchestrator | 2026-04-09 00:46:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:33.542240 | orchestrator | 2026-04-09 00:46:33 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:33.545292 | orchestrator | 2026-04-09 00:46:33 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:33.548410 | orchestrator | 2026-04-09 00:46:33 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:33.548459 | orchestrator | 2026-04-09 00:46:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:36.590767 | orchestrator | 2026-04-09 00:46:36 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:36.592621 | orchestrator | 2026-04-09 00:46:36 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:36.599470 | orchestrator | 2026-04-09 00:46:36 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:36.605004 | orchestrator | 2026-04-09 00:46:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:39.641541 | orchestrator | 2026-04-09 00:46:39 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:39.641630 | orchestrator | 
2026-04-09 00:46:39 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:39.642144 | orchestrator | 2026-04-09 00:46:39 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:39.642177 | orchestrator | 2026-04-09 00:46:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:42.691085 | orchestrator | 2026-04-09 00:46:42 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:42.694288 | orchestrator | 2026-04-09 00:46:42 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:42.695170 | orchestrator | 2026-04-09 00:46:42 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:42.695218 | orchestrator | 2026-04-09 00:46:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:45.736815 | orchestrator | 2026-04-09 00:46:45 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:45.738672 | orchestrator | 2026-04-09 00:46:45 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:45.740670 | orchestrator | 2026-04-09 00:46:45 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:45.740718 | orchestrator | 2026-04-09 00:46:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:48.785113 | orchestrator | 2026-04-09 00:46:48 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:48.790235 | orchestrator | 2026-04-09 00:46:48 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:48.790294 | orchestrator | 2026-04-09 00:46:48 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:48.791162 | orchestrator | 2026-04-09 00:46:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:51.810707 | orchestrator | 2026-04-09 00:46:51 | INFO  | Task 
5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:51.810865 | orchestrator | 2026-04-09 00:46:51 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:51.810894 | orchestrator | 2026-04-09 00:46:51 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:51.810994 | orchestrator | 2026-04-09 00:46:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:54.836337 | orchestrator | 2026-04-09 00:46:54 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:54.836409 | orchestrator | 2026-04-09 00:46:54 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:54.837337 | orchestrator | 2026-04-09 00:46:54 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:54.837636 | orchestrator | 2026-04-09 00:46:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:46:57.901931 | orchestrator | 2026-04-09 00:46:57 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:46:57.902078 | orchestrator | 2026-04-09 00:46:57 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:46:57.902091 | orchestrator | 2026-04-09 00:46:57 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:46:57.902102 | orchestrator | 2026-04-09 00:46:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:00.906476 | orchestrator | 2026-04-09 00:47:00 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:47:00.910131 | orchestrator | 2026-04-09 00:47:00 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:47:00.912996 | orchestrator | 2026-04-09 00:47:00 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:47:00.913051 | orchestrator | 2026-04-09 00:47:00 | INFO  | Wait 1 second(s) until the next 
check 2026-04-09 00:47:03.944617 | orchestrator | 2026-04-09 00:47:03 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:47:03.946936 | orchestrator | 2026-04-09 00:47:03 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:47:03.947701 | orchestrator | 2026-04-09 00:47:03 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:47:03.947725 | orchestrator | 2026-04-09 00:47:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:07.003531 | orchestrator | 2026-04-09 00:47:06 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:47:07.003593 | orchestrator | 2026-04-09 00:47:06 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:47:07.003602 | orchestrator | 2026-04-09 00:47:06 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:47:07.003609 | orchestrator | 2026-04-09 00:47:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:10.150181 | orchestrator | 2026-04-09 00:47:10 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:47:10.150236 | orchestrator | 2026-04-09 00:47:10 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:47:10.150242 | orchestrator | 2026-04-09 00:47:10 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:47:10.150247 | orchestrator | 2026-04-09 00:47:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:47:13.194442 | orchestrator | 2026-04-09 00:47:13 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED 2026-04-09 00:47:13.197560 | orchestrator | 2026-04-09 00:47:13 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:47:13.199475 | orchestrator | 2026-04-09 00:47:13 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 
00:47:13.199818 | orchestrator | 2026-04-09 00:47:13 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:47:16.249747 | orchestrator | 2026-04-09 00:47:16 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:47:16.250553 | orchestrator | 2026-04-09 00:47:16 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:47:16.252935 | orchestrator | 2026-04-09 00:47:16 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED
2026-04-09 00:47:16.252985 | orchestrator | 2026-04-09 00:47:16 | INFO  | Wait 1 second(s) until the next check
[repeated polling output elided: the same three tasks were reported in state STARTED on every check, roughly every 3 seconds, from 00:47:19 through 00:48:32]
2026-04-09 00:48:32.524943 | orchestrator | 2026-04-09 00:48:32 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state STARTED
2026-04-09 00:48:32.525040 | orchestrator | 2026-04-09 00:48:32 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:48:32.525909 | orchestrator | 2026-04-09 00:48:32 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED
2026-04-09 00:48:32.525930 | orchestrator | 2026-04-09 00:48:32 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:48:35.563331 | orchestrator | 2026-04-09 00:48:35 | INFO  | Task ec3ba8c3-da39-403a-b81f-b45faf847c3f is in state STARTED
2026-04-09 00:48:35.564438 | orchestrator | 2026-04-09 00:48:35 | INFO  | Task 5f170de4-201a-47c7-8fbf-6400a6b1abe8 is in state SUCCESS
2026-04-09 00:48:35.566118 | orchestrator |
2026-04-09
00:48:35.566177 | orchestrator | 2026-04-09 00:48:35.566216 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-09 00:48:35.566229 | orchestrator | 2026-04-09 00:48:35.566241 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-09 00:48:35.566253 | orchestrator | Thursday 09 April 2026 00:43:48 +0000 (0:00:00.296) 0:00:00.296 ******** 2026-04-09 00:48:35.566265 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:48:35.566277 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:48:35.566289 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:48:35.566300 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:48:35.566311 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:48:35.566323 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:48:35.566334 | orchestrator | 2026-04-09 00:48:35.566345 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-09 00:48:35.566357 | orchestrator | Thursday 09 April 2026 00:43:49 +0000 (0:00:00.606) 0:00:00.903 ******** 2026-04-09 00:48:35.566368 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.566380 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.566392 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.566403 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.566414 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.566425 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.566437 | orchestrator | 2026-04-09 00:48:35.566448 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-09 00:48:35.566459 | orchestrator | Thursday 09 April 2026 00:43:49 +0000 (0:00:00.752) 0:00:01.655 ******** 2026-04-09 00:48:35.566540 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.566563 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
00:48:35.566581 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.566599 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.566616 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.566634 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.566653 | orchestrator | 2026-04-09 00:48:35.566673 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-09 00:48:35.566692 | orchestrator | Thursday 09 April 2026 00:43:50 +0000 (0:00:00.638) 0:00:02.293 ******** 2026-04-09 00:48:35.566712 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:48:35.566732 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:48:35.566752 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:48:35.566772 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:35.566791 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:35.566812 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:48:35.566831 | orchestrator | 2026-04-09 00:48:35.566850 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-09 00:48:35.566885 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:02.681) 0:00:04.975 ******** 2026-04-09 00:48:35.566905 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:48:35.566925 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:48:35.566944 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:48:35.566962 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:48:35.567003 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:35.567016 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:35.567027 | orchestrator | 2026-04-09 00:48:35.567039 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-09 00:48:35.567050 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:02.060) 0:00:07.036 ******** 2026-04-09 00:48:35.567061 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 00:48:35.567073 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:48:35.567084 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:48:35.567095 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:48:35.567106 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:35.567117 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:35.567128 | orchestrator | 2026-04-09 00:48:35.567140 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-09 00:48:35.567151 | orchestrator | Thursday 09 April 2026 00:43:56 +0000 (0:00:01.483) 0:00:08.519 ******** 2026-04-09 00:48:35.567162 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.567188 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.567210 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.567221 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.567232 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.567243 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.567255 | orchestrator | 2026-04-09 00:48:35.567266 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-04-09 00:48:35.567278 | orchestrator | Thursday 09 April 2026 00:43:57 +0000 (0:00:00.761) 0:00:09.281 ******** 2026-04-09 00:48:35.567290 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.567301 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.567312 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.567324 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.567335 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.567346 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.567357 | orchestrator | 2026-04-09 00:48:35.567368 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-09 
00:48:35.567380 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:00.744) 0:00:10.025 ******** 2026-04-09 00:48:35.567391 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 00:48:35.567402 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 00:48:35.567413 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.567425 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 00:48:35.567436 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 00:48:35.567498 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.567511 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 00:48:35.567522 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 00:48:35.567533 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.567572 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 00:48:35.567602 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 00:48:35.567614 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.567626 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 00:48:35.567637 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 00:48:35.567648 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.567659 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-09 00:48:35.567670 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-09 00:48:35.567681 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.567701 | orchestrator | 2026-04-09 
00:48:35.567712 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-04-09 00:48:35.567723 | orchestrator | Thursday 09 April 2026 00:43:59 +0000 (0:00:00.875) 0:00:10.901 ******** 2026-04-09 00:48:35.567735 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.567746 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.567757 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.567768 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.567779 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.567791 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.567802 | orchestrator | 2026-04-09 00:48:35.567813 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-09 00:48:35.567825 | orchestrator | Thursday 09 April 2026 00:44:00 +0000 (0:00:01.529) 0:00:12.430 ******** 2026-04-09 00:48:35.567837 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:48:35.567848 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:48:35.567859 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:48:35.567870 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:48:35.567881 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:48:35.567892 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:48:35.567903 | orchestrator | 2026-04-09 00:48:35.567914 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-04-09 00:48:35.567926 | orchestrator | Thursday 09 April 2026 00:44:01 +0000 (0:00:01.005) 0:00:13.435 ******** 2026-04-09 00:48:35.567937 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:48:35.567948 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:48:35.567959 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:35.567970 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:35.567981 | orchestrator | changed: [testbed-node-0] 
2026-04-09 00:48:35.567993 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:48:35.568004 | orchestrator | 2026-04-09 00:48:35.568015 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-04-09 00:48:35.568033 | orchestrator | Thursday 09 April 2026 00:44:08 +0000 (0:00:06.705) 0:00:20.141 ******** 2026-04-09 00:48:35.568044 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.568055 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.568067 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.568078 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.568090 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.568101 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.568112 | orchestrator | 2026-04-09 00:48:35.568123 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-04-09 00:48:35.568134 | orchestrator | Thursday 09 April 2026 00:44:09 +0000 (0:00:01.321) 0:00:21.462 ******** 2026-04-09 00:48:35.568146 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.568157 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.568168 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.568179 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.568191 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.568221 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.568232 | orchestrator | 2026-04-09 00:48:35.568245 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-04-09 00:48:35.568257 | orchestrator | Thursday 09 April 2026 00:44:11 +0000 (0:00:01.806) 0:00:23.269 ******** 2026-04-09 00:48:35.568268 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.568279 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.568291 
| orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.568301 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.568312 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.568324 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.568335 | orchestrator | 2026-04-09 00:48:35.568346 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-04-09 00:48:35.568364 | orchestrator | Thursday 09 April 2026 00:44:12 +0000 (0:00:01.306) 0:00:24.575 ******** 2026-04-09 00:48:35.568376 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-04-09 00:48:35.568387 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-04-09 00:48:35.568399 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.568410 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-04-09 00:48:35.568421 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-04-09 00:48:35.568433 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.568444 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-04-09 00:48:35.568455 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-04-09 00:48:35.568466 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.568558 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-04-09 00:48:35.568580 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-04-09 00:48:35.568600 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.568615 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-04-09 00:48:35.568639 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-04-09 00:48:35.568668 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.568687 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-04-09 00:48:35.568707 | orchestrator | skipping: [testbed-node-2] => 
(item=rancher/k3s)  2026-04-09 00:48:35.568725 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.568743 | orchestrator | 2026-04-09 00:48:35.568762 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-04-09 00:48:35.568793 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:00.795) 0:00:25.370 ******** 2026-04-09 00:48:35.568813 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.568833 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.568850 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.568867 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.568885 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.568903 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.568921 | orchestrator | 2026-04-09 00:48:35.568939 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-04-09 00:48:35.568958 | orchestrator | Thursday 09 April 2026 00:44:14 +0000 (0:00:00.840) 0:00:26.211 ******** 2026-04-09 00:48:35.568977 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.568997 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.569015 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.569034 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.569052 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.569070 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.569090 | orchestrator | 2026-04-09 00:48:35.569109 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-04-09 00:48:35.569128 | orchestrator | 2026-04-09 00:48:35.569166 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-04-09 00:48:35.569187 | orchestrator | Thursday 09 April 2026 00:44:15 +0000 (0:00:01.306) 
0:00:27.518 ******** 2026-04-09 00:48:35.569205 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:48:35.569225 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:48:35.569254 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:48:35.569275 | orchestrator | 2026-04-09 00:48:35.569293 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-04-09 00:48:35.569313 | orchestrator | Thursday 09 April 2026 00:44:17 +0000 (0:00:01.321) 0:00:28.840 ******** 2026-04-09 00:48:35.569332 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:48:35.569352 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:48:35.569372 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:48:35.569390 | orchestrator | 2026-04-09 00:48:35.569407 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-04-09 00:48:35.569442 | orchestrator | Thursday 09 April 2026 00:44:18 +0000 (0:00:01.135) 0:00:29.975 ******** 2026-04-09 00:48:35.569462 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:48:35.569507 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:48:35.569529 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:48:35.569548 | orchestrator | 2026-04-09 00:48:35.569568 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-04-09 00:48:35.569587 | orchestrator | Thursday 09 April 2026 00:44:19 +0000 (0:00:01.175) 0:00:31.151 ******** 2026-04-09 00:48:35.569608 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:48:35.569630 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:48:35.569642 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:48:35.569653 | orchestrator | 2026-04-09 00:48:35.569665 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-04-09 00:48:35.569677 | orchestrator | Thursday 09 April 2026 00:44:20 +0000 (0:00:01.455) 0:00:32.607 ******** 2026-04-09 00:48:35.569688 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 00:48:35.569699 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.569711 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.569722 | orchestrator | 2026-04-09 00:48:35.569733 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-04-09 00:48:35.569745 | orchestrator | Thursday 09 April 2026 00:44:21 +0000 (0:00:00.676) 0:00:33.283 ******** 2026-04-09 00:48:35.569756 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:48:35.569769 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:35.569780 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:35.569791 | orchestrator | 2026-04-09 00:48:35.569802 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-04-09 00:48:35.569814 | orchestrator | Thursday 09 April 2026 00:44:22 +0000 (0:00:01.334) 0:00:34.618 ******** 2026-04-09 00:48:35.569825 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:48:35.569836 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:48:35.569847 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:48:35.569858 | orchestrator | 2026-04-09 00:48:35.569870 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-04-09 00:48:35.569881 | orchestrator | Thursday 09 April 2026 00:44:24 +0000 (0:00:01.477) 0:00:36.095 ******** 2026-04-09 00:48:35.569893 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:48:35.569904 | orchestrator | 2026-04-09 00:48:35.569916 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-04-09 00:48:35.569931 | orchestrator | Thursday 09 April 2026 00:44:25 +0000 (0:00:00.916) 0:00:37.012 ******** 2026-04-09 00:48:35.569954 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:48:35.569982 | orchestrator | ok: [testbed-node-0] 
2026-04-09 00:48:35.569999 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:35.570079 | orchestrator |
2026-04-09 00:48:35.570106 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-09 00:48:35.570127 | orchestrator | Thursday 09 April 2026 00:44:28 +0000 (0:00:03.214) 0:00:40.227 ********
2026-04-09 00:48:35.570147 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:35.570166 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:35.570186 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:35.570198 | orchestrator |
2026-04-09 00:48:35.570210 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-09 00:48:35.570221 | orchestrator | Thursday 09 April 2026 00:44:29 +0000 (0:00:01.227) 0:00:41.455 ********
2026-04-09 00:48:35.570233 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:35.570244 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:35.570255 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:35.570266 | orchestrator |
2026-04-09 00:48:35.570278 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-09 00:48:35.570289 | orchestrator | Thursday 09 April 2026 00:44:31 +0000 (0:00:01.849) 0:00:43.304 ********
2026-04-09 00:48:35.570301 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:35.570324 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:35.570337 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:35.570348 | orchestrator |
2026-04-09 00:48:35.570360 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-09 00:48:35.570385 | orchestrator | Thursday 09 April 2026 00:44:32 +0000 (0:00:01.494) 0:00:44.799 ********
2026-04-09 00:48:35.570398 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:35.570410 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:35.570429 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:35.570459 | orchestrator |
2026-04-09 00:48:35.570505 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-09 00:48:35.570526 | orchestrator | Thursday 09 April 2026 00:44:33 +0000 (0:00:00.487) 0:00:45.286 ********
2026-04-09 00:48:35.570545 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:35.570564 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:35.570583 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:35.570603 | orchestrator |
2026-04-09 00:48:35.570622 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-09 00:48:35.570643 | orchestrator | Thursday 09 April 2026 00:44:33 +0000 (0:00:00.405) 0:00:45.691 ********
2026-04-09 00:48:35.570664 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:35.570683 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:35.570703 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:35.570722 | orchestrator |
2026-04-09 00:48:35.570742 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-09 00:48:35.570762 | orchestrator | Thursday 09 April 2026 00:44:35 +0000 (0:00:01.966) 0:00:47.658 ********
2026-04-09 00:48:35.570780 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:35.570800 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:35.570820 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:35.570840 | orchestrator |
2026-04-09 00:48:35.570854 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-09 00:48:35.570866 | orchestrator | Thursday 09 April 2026 00:44:38 +0000 (0:00:02.291) 0:00:49.950 ********
2026-04-09 00:48:35.570877 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:35.570889 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:35.570900 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:35.570911 | orchestrator |
2026-04-09 00:48:35.570923 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-04-09 00:48:35.570934 | orchestrator | Thursday 09 April 2026 00:44:38 +0000 (0:00:00.593) 0:00:50.543 ********
2026-04-09 00:48:35.570947 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-09 00:48:35.570959 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-09 00:48:35.570980 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-09 00:48:35.570992 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-09 00:48:35.571004 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-09 00:48:35.571015 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-09 00:48:35.571026 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-09 00:48:35.571037 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-09 00:48:35.571048 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-09 00:48:35.571069 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-09 00:48:35.571081 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-09 00:48:35.571092 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-09 00:48:35.571104 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-09 00:48:35.571121 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-09 00:48:35.571139 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-09 00:48:35.571168 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (15 retries left).
2026-04-09 00:48:35.571190 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (15 retries left).
2026-04-09 00:48:35.571208 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (15 retries left).
2026-04-09 00:48:35.571226 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:35.571244 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:35.571274 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:35.571294 | orchestrator |
2026-04-09 00:48:35.571313 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-09 00:48:35.571334 | orchestrator | Thursday 09 April 2026 00:45:43 +0000 (0:01:04.773) 0:01:55.317 ********
2026-04-09 00:48:35.571354 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:35.571373 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:35.571393 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:35.571412 | orchestrator |
2026-04-09 00:48:35.571431 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-09 00:48:35.571449 | orchestrator | Thursday 09 April 2026 00:45:43 +0000 (0:00:00.285) 0:01:55.603 ********
2026-04-09 00:48:35.571469 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:35.571553 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:35.571575 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:35.571588 | orchestrator |
2026-04-09 00:48:35.571600 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-09 00:48:35.571612 | orchestrator | Thursday 09 April 2026 00:45:44 +0000 (0:00:01.128) 0:01:56.731 ********
2026-04-09 00:48:35.571623 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:35.571634 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:35.571645 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:35.571657 | orchestrator |
2026-04-09 00:48:35.571668 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-09 00:48:35.571680 | orchestrator | Thursday 09 April 2026 00:45:46 +0000 (0:00:01.209) 0:01:57.940 ********
2026-04-09 00:48:35.571691 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:35.571706 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:35.571732 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:35.571757 | orchestrator |
2026-04-09 00:48:35.571776 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-09 00:48:35.571794 | orchestrator | Thursday 09 April 2026 00:46:13 +0000 (0:00:27.052) 0:02:24.993 ********
2026-04-09 00:48:35.571814 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:35.571832 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:35.571852 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:35.571873 | orchestrator |
2026-04-09 00:48:35.571912 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-09 00:48:35.571932 | orchestrator | Thursday 09 April 2026 00:46:13 +0000 (0:00:00.729) 0:02:25.723 ********
2026-04-09 00:48:35.571951 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:35.571971 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:35.571989 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:35.572008 | orchestrator |
2026-04-09 00:48:35.572021 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-09 00:48:35.572038 | orchestrator | Thursday 09 April 2026 00:46:14 +0000 (0:00:00.842) 0:02:26.565 ********
2026-04-09 00:48:35.572049 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:35.572059 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:35.572069 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:35.572080 | orchestrator |
2026-04-09 00:48:35.572090 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-09 00:48:35.572101 | orchestrator | Thursday 09 April 2026 00:46:15 +0000 (0:00:00.655) 0:02:27.221 ********
2026-04-09 00:48:35.572111 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:35.572121 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:35.572131 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:35.572141 | orchestrator |
2026-04-09 00:48:35.572151 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-09 00:48:35.572162 | orchestrator | Thursday 09 April 2026 00:46:16 +0000 (0:00:00.735) 0:02:27.956 ********
2026-04-09 00:48:35.572172 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:35.572182 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:35.572192 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:35.572202 | orchestrator |
2026-04-09 00:48:35.572213 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-09 00:48:35.572223 | orchestrator | Thursday 09 April 2026 00:46:16 +0000 (0:00:00.340) 0:02:28.297 ********
2026-04-09 00:48:35.572240 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:35.572255 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:35.572280 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:35.572299 | orchestrator |
2026-04-09 00:48:35.572316 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-09 00:48:35.572332 | orchestrator | Thursday 09 April 2026 00:46:17 +0000 (0:00:00.627) 0:02:28.924 ********
2026-04-09 00:48:35.572347 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:35.572364 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:35.572381 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:35.572398 | orchestrator |
2026-04-09 00:48:35.572409 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-09 00:48:35.572420 | orchestrator | Thursday 09 April 2026 00:46:18 +0000 (0:00:00.955) 0:02:29.880 ********
2026-04-09 00:48:35.572430 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:35.572440 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:35.572450 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:35.572460 | orchestrator |
2026-04-09 00:48:35.572470 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-09 00:48:35.572510 | orchestrator | Thursday 09 April 2026 00:46:18 +0000 (0:00:00.778) 0:02:30.658 ********
2026-04-09 00:48:35.572526 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:48:35.572541 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:48:35.572552 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:48:35.572562 | orchestrator |
2026-04-09 00:48:35.572572 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-09 00:48:35.572582 | orchestrator | Thursday 09 April 2026 00:46:19 +0000 (0:00:00.826) 0:02:31.484 ********
2026-04-09 00:48:35.572593 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:35.572603 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:35.572613 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:35.572624 | orchestrator |
2026-04-09 00:48:35.572634 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-09 00:48:35.572655 | orchestrator | Thursday 09 April 2026 00:46:19 +0000 (0:00:00.247) 0:02:31.732 ********
2026-04-09 00:48:35.572666 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:35.572676 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:35.572687 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:35.572697 | orchestrator |
2026-04-09 00:48:35.572719 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-09 00:48:35.572730 | orchestrator | Thursday 09 April 2026 00:46:20 +0000 (0:00:00.394) 0:02:32.127 ********
2026-04-09 00:48:35.572741 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:35.572751 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:35.572761 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:35.572771 | orchestrator |
2026-04-09 00:48:35.572781 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-09 00:48:35.572792 | orchestrator | Thursday 09 April 2026 00:46:21 +0000 (0:00:00.748) 0:02:32.875 ********
2026-04-09 00:48:35.572801 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:35.572812 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:35.572822 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:35.572832 | orchestrator |
2026-04-09 00:48:35.572842 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-09 00:48:35.572853 | orchestrator | Thursday 09 April 2026 00:46:21 +0000 (0:00:00.705) 0:02:33.581 ********
2026-04-09 00:48:35.572863 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-09 00:48:35.572873 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-09 00:48:35.572884 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-09 00:48:35.572894 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-09 00:48:35.572904 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-09 00:48:35.572915 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-09 00:48:35.572926 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-09 00:48:35.572943 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-09 00:48:35.572970 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-09 00:48:35.572988 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-09 00:48:35.573012 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-09 00:48:35.573030 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-09 00:48:35.573047 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-09 00:48:35.573065 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-09 00:48:35.573077 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-09 00:48:35.573087 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-09 00:48:35.573098 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-09 00:48:35.573111 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-09 00:48:35.573135 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-09 00:48:35.573155 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-09 00:48:35.573172 | orchestrator |
2026-04-09 00:48:35.573205 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-09 00:48:35.573223 | orchestrator |
2026-04-09 00:48:35.573239 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-09 00:48:35.573257 | orchestrator | Thursday 09 April 2026 00:46:24 +0000 (0:00:03.224) 0:02:36.805 ********
2026-04-09 00:48:35.573274 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:48:35.573291 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:48:35.573308 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:48:35.573325 | orchestrator |
2026-04-09 00:48:35.573343 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-09 00:48:35.573359 | orchestrator | Thursday 09 April 2026 00:46:25 +0000 (0:00:00.311) 0:02:37.116 ********
2026-04-09 00:48:35.573374 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:48:35.573386 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:48:35.573410 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:48:35.573430 | orchestrator |
2026-04-09 00:48:35.573446 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-09 00:48:35.573462 | orchestrator | Thursday 09 April 2026 00:46:26 +0000 (0:00:00.763) 0:02:37.880 ********
2026-04-09 00:48:35.573537 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:48:35.573556 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:48:35.573570 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:48:35.573585 | orchestrator |
2026-04-09 00:48:35.573601 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-09 00:48:35.573616 | orchestrator | Thursday 09 April 2026 00:46:26 +0000 (0:00:00.279) 0:02:38.159 ********
2026-04-09 00:48:35.573632 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:48:35.573649 | orchestrator |
2026-04-09 00:48:35.573665 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-09 00:48:35.573681 | orchestrator | Thursday 09 April 2026 00:46:26 +0000 (0:00:00.531) 0:02:38.690 ********
2026-04-09 00:48:35.573697 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:48:35.573715 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:48:35.573744 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:48:35.573763 | orchestrator |
2026-04-09 00:48:35.573780 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-09 00:48:35.573796 | orchestrator | Thursday 09 April 2026 00:46:27 +0000 (0:00:00.282) 0:02:38.973 ********
2026-04-09 00:48:35.573812 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:48:35.573828 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:48:35.573844 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:48:35.573859 | orchestrator |
2026-04-09 00:48:35.573875 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-09 00:48:35.573892 | orchestrator | Thursday 09 April 2026 00:46:27 +0000 (0:00:00.429) 0:02:39.245 ********
2026-04-09 00:48:35.573908 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:48:35.573924 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:48:35.573940 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:48:35.573956 | orchestrator |
2026-04-09 00:48:35.573972 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-09 00:48:35.573989 | orchestrator | Thursday 09 April 2026 00:46:27 +0000 (0:00:00.429) 0:02:39.674 ********
2026-04-09 00:48:35.574005 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:48:35.574064 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:48:35.574078 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:48:35.574092 | orchestrator |
2026-04-09 00:48:35.574105 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-09 00:48:35.574118 | orchestrator | Thursday 09 April 2026 00:46:28 +0000 (0:00:00.753) 0:02:40.428 ********
2026-04-09 00:48:35.574131 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:48:35.574144 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:48:35.574157 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:48:35.574171 | orchestrator |
2026-04-09 00:48:35.574197 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-09 00:48:35.574211 | orchestrator | Thursday 09 April 2026 00:46:29 +0000 (0:00:01.221) 0:02:41.650 ********
2026-04-09 00:48:35.574225 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:48:35.574238 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:48:35.574252 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:48:35.574265 | orchestrator |
2026-04-09 00:48:35.574279 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-09 00:48:35.574292 | orchestrator | Thursday 09 April 2026 00:46:31 +0000 (0:00:01.355) 0:02:43.006 ********
2026-04-09 00:48:35.574305 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:48:35.574317 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:48:35.574329 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:48:35.574342 | orchestrator |
2026-04-09 00:48:35.574364 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-09 00:48:35.574378 | orchestrator |
2026-04-09 00:48:35.574392 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-09 00:48:35.574406 | orchestrator | Thursday 09 April 2026 00:46:41 +0000 (0:00:09.910) 0:02:52.916 ********
2026-04-09 00:48:35.574420 | orchestrator | ok: [testbed-manager]
2026-04-09 00:48:35.574434 | orchestrator |
2026-04-09 00:48:35.574448 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-09 00:48:35.574457 | orchestrator | Thursday 09 April 2026 00:46:41 +0000 (0:00:00.717) 0:02:53.634 ********
2026-04-09 00:48:35.574465 | orchestrator | changed: [testbed-manager]
2026-04-09 00:48:35.574493 | orchestrator |
2026-04-09 00:48:35.574503 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-09 00:48:35.574511 | orchestrator | Thursday 09 April 2026 00:46:42 +0000 (0:00:00.435) 0:02:54.070 ********
2026-04-09 00:48:35.574520 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-09 00:48:35.574528 | orchestrator |
2026-04-09 00:48:35.574536 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-09 00:48:35.574545 | orchestrator | Thursday 09 April 2026 00:46:42 +0000 (0:00:00.584) 0:02:54.654 ********
2026-04-09 00:48:35.574553 | orchestrator | changed: [testbed-manager]
2026-04-09 00:48:35.574564 | orchestrator |
2026-04-09 00:48:35.574576 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-09 00:48:35.574590 | orchestrator | Thursday 09 April 2026 00:46:43 +0000 (0:00:00.765) 0:02:55.419 ********
2026-04-09 00:48:35.574604 | orchestrator | changed: [testbed-manager]
2026-04-09 00:48:35.574618 | orchestrator |
2026-04-09 00:48:35.574633 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-09 00:48:35.574646 | orchestrator | Thursday 09 April 2026 00:46:44 +0000 (0:00:00.496) 0:02:55.915 ********
2026-04-09 00:48:35.574658 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-09 00:48:35.574673 | orchestrator |
2026-04-09 00:48:35.574687 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-09 00:48:35.574701 | orchestrator | Thursday 09 April 2026 00:46:45 +0000 (0:00:01.788) 0:02:57.704 ********
2026-04-09 00:48:35.574716 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-09 00:48:35.574730 | orchestrator |
2026-04-09 00:48:35.574744 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-09 00:48:35.574753 | orchestrator | Thursday 09 April 2026 00:46:46 +0000 (0:00:00.897) 0:02:58.601 ********
2026-04-09 00:48:35.574761 | orchestrator | changed: [testbed-manager]
2026-04-09 00:48:35.574770 | orchestrator |
2026-04-09 00:48:35.574778 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-09 00:48:35.574786 | orchestrator | Thursday 09 April 2026 00:46:47 +0000 (0:00:00.423) 0:02:59.025 ********
2026-04-09 00:48:35.574795 | orchestrator | changed: [testbed-manager]
2026-04-09 00:48:35.574803 | orchestrator |
2026-04-09 00:48:35.574815 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-09 00:48:35.574830 | orchestrator |
2026-04-09 00:48:35.574843 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-09 00:48:35.574868 | orchestrator | Thursday 09 April 2026 00:46:47 +0000 (0:00:00.450) 0:02:59.476 ********
2026-04-09 00:48:35.574883 | orchestrator | ok: [testbed-manager]
2026-04-09 00:48:35.574892 | orchestrator |
2026-04-09 00:48:35.574900 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-09 00:48:35.574908 | orchestrator | Thursday 09 April 2026 00:46:47 +0000 (0:00:00.161) 0:02:59.638 ********
2026-04-09 00:48:35.574928 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-09 00:48:35.574937 | orchestrator |
2026-04-09 00:48:35.574945 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-09 00:48:35.574954 | orchestrator | Thursday 09 April 2026 00:46:48 +0000 (0:00:00.302) 0:02:59.940 ********
2026-04-09 00:48:35.574962 | orchestrator | ok: [testbed-manager]
2026-04-09 00:48:35.574970 | orchestrator |
2026-04-09 00:48:35.574978 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-09 00:48:35.574987 | orchestrator | Thursday 09 April 2026 00:46:49 +0000 (0:00:00.889) 0:03:00.830 ********
2026-04-09 00:48:35.574995 | orchestrator | ok: [testbed-manager]
2026-04-09 00:48:35.575003 | orchestrator |
2026-04-09 00:48:35.575011 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-09 00:48:35.575019 | orchestrator | Thursday 09 April 2026 00:46:50 +0000 (0:00:01.506) 0:03:02.336 ********
2026-04-09 00:48:35.575027 | orchestrator | changed: [testbed-manager]
2026-04-09 00:48:35.575035 | orchestrator |
2026-04-09 00:48:35.575044 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-09 00:48:35.575052 | orchestrator | Thursday 09 April 2026 00:46:51 +0000 (0:00:01.075) 0:03:03.412 ********
2026-04-09 00:48:35.575064 | orchestrator | ok: [testbed-manager]
2026-04-09 00:48:35.575078 | orchestrator |
2026-04-09 00:48:35.575093 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-09 00:48:35.575108 | orchestrator | Thursday 09 April 2026 00:46:52 +0000 (0:00:00.398) 0:03:03.810 ********
2026-04-09 00:48:35.575122 | orchestrator | changed: [testbed-manager]
2026-04-09 00:48:35.575132 | orchestrator |
2026-04-09 00:48:35.575141 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-09 00:48:35.575149 | orchestrator | Thursday 09 April 2026 00:46:59 +0000 (0:00:07.000) 0:03:10.811 ********
2026-04-09 00:48:35.575158 | orchestrator | changed: [testbed-manager]
2026-04-09 00:48:35.575166 | orchestrator |
2026-04-09 00:48:35.575174 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-09 00:48:35.575182 | orchestrator | Thursday 09 April 2026 00:47:11 +0000 (0:00:12.775) 0:03:23.586 ********
2026-04-09 00:48:35.575190 | orchestrator | ok: [testbed-manager]
2026-04-09 00:48:35.575199 | orchestrator |
2026-04-09 00:48:35.575207 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-09 00:48:35.575215 | orchestrator |
2026-04-09 00:48:35.575223 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-09 00:48:35.575232 | orchestrator | Thursday 09 April 2026 00:47:12 +0000 (0:00:00.612) 0:03:24.199 ********
2026-04-09 00:48:35.575246 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:48:35.575265 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:48:35.575279 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:48:35.575311 | orchestrator |
2026-04-09 00:48:35.575325 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-09 00:48:35.575337 | orchestrator | Thursday 09 April 2026 00:47:12 +0000 (0:00:00.310) 0:03:24.510 ********
2026-04-09 00:48:35.575349 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:35.575362 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:48:35.575375 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:48:35.575387 | orchestrator |
2026-04-09 00:48:35.575400 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-09 00:48:35.575412 | orchestrator | Thursday 09 April 2026 00:47:13 +0000 (0:00:00.568) 0:03:25.078 ********
2026-04-09 00:48:35.575432 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:48:35.575445 | orchestrator |
2026-04-09 00:48:35.575457 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-09 00:48:35.575469 | orchestrator | Thursday 09 April 2026 00:47:13 +0000 (0:00:00.524) 0:03:25.603 ********
2026-04-09 00:48:35.575497 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 00:48:35.575509 | orchestrator |
2026-04-09 00:48:35.575521 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-09 00:48:35.575532 | orchestrator | Thursday 09 April 2026 00:47:14 +0000 (0:00:00.967) 0:03:26.570 ********
2026-04-09 00:48:35.575545 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 00:48:35.575556 | orchestrator |
2026-04-09 00:48:35.575568 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-09 00:48:35.575580 | orchestrator | Thursday 09 April 2026 00:47:15 +0000 (0:00:00.974) 0:03:27.545 ********
2026-04-09 00:48:35.575593 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:35.575606 | orchestrator |
2026-04-09 00:48:35.575620 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-09 00:48:35.575635 | orchestrator | Thursday 09 April 2026 00:47:15 +0000 (0:00:00.154) 0:03:27.699 ********
2026-04-09 00:48:35.575648 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 00:48:35.575662 | orchestrator |
2026-04-09 00:48:35.575676 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-09 00:48:35.575689 | orchestrator | Thursday 09 April 2026 00:47:17 +0000 (0:00:01.268) 0:03:28.968 ********
2026-04-09 00:48:35.575703 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:35.575712 | orchestrator |
2026-04-09 00:48:35.575720 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-09 00:48:35.575728 | orchestrator | Thursday 09 April 2026 00:47:17 +0000 (0:00:00.183) 0:03:29.152 ********
2026-04-09 00:48:35.575737 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:35.575745 | orchestrator |
2026-04-09 00:48:35.575753 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-09 00:48:35.575762 | orchestrator | Thursday 09 April 2026 00:47:17 +0000 (0:00:00.340) 0:03:29.492 ********
2026-04-09 00:48:35.575770 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:35.575778 | orchestrator |
2026-04-09 00:48:35.575786 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-09 00:48:35.575794 | orchestrator | Thursday 09 April 2026 00:47:17 +0000 (0:00:00.135) 0:03:29.628 ********
2026-04-09 00:48:35.575802 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:48:35.575811 | orchestrator |
2026-04-09 00:48:35.575819 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-09 00:48:35.575836 | orchestrator | Thursday 09 April 2026 00:47:17 +0000 (0:00:00.140) 0:03:29.768 ********
2026-04-09 00:48:35.575845 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 00:48:35.575853 | orchestrator |
2026-04-09 00:48:35.575861 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-09 00:48:35.575869 | orchestrator | Thursday 09 April 2026 00:47:24 +0000 (0:00:06.041) 0:03:35.809 ********
2026-04-09 00:48:35.575878 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-09 00:48:35.575886 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-04-09 00:48:35.575895 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-09 00:48:35.575904 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-09 00:48:35.575912 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-09 00:48:35.575920 | orchestrator | 2026-04-09 00:48:35.575928 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-09 00:48:35.575936 | orchestrator | Thursday 09 April 2026 00:48:06 +0000 (0:00:42.603) 0:04:18.413 ******** 2026-04-09 00:48:35.575951 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:48:35.575960 | orchestrator | 2026-04-09 00:48:35.575974 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-09 00:48:35.575987 | orchestrator | Thursday 09 April 2026 00:48:07 +0000 (0:00:01.131) 0:04:19.545 ******** 2026-04-09 00:48:35.576001 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-09 00:48:35.576015 | orchestrator | 2026-04-09 00:48:35.576029 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-09 00:48:35.576044 | orchestrator | Thursday 09 April 2026 00:48:09 +0000 (0:00:01.666) 0:04:21.212 ******** 2026-04-09 00:48:35.576058 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-09 00:48:35.576072 | orchestrator | 2026-04-09 00:48:35.576086 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-09 00:48:35.576095 | orchestrator | Thursday 09 April 2026 00:48:10 +0000 (0:00:01.109) 0:04:22.322 ******** 2026-04-09 00:48:35.576103 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.576111 | orchestrator | 2026-04-09 00:48:35.576119 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-09 00:48:35.576128 | orchestrator 
| Thursday 09 April 2026 00:48:10 +0000 (0:00:00.113) 0:04:22.435 ******** 2026-04-09 00:48:35.576136 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-09 00:48:35.576144 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-09 00:48:35.576153 | orchestrator | 2026-04-09 00:48:35.576161 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-09 00:48:35.576169 | orchestrator | Thursday 09 April 2026 00:48:12 +0000 (0:00:02.008) 0:04:24.444 ******** 2026-04-09 00:48:35.576177 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.576186 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.576194 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.576202 | orchestrator | 2026-04-09 00:48:35.576211 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-09 00:48:35.576219 | orchestrator | Thursday 09 April 2026 00:48:12 +0000 (0:00:00.304) 0:04:24.748 ******** 2026-04-09 00:48:35.576231 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:48:35.576244 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:48:35.576257 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:48:35.576271 | orchestrator | 2026-04-09 00:48:35.576284 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-09 00:48:35.576299 | orchestrator | 2026-04-09 00:48:35.576313 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-09 00:48:35.576326 | orchestrator | Thursday 09 April 2026 00:48:14 +0000 (0:00:01.110) 0:04:25.859 ******** 2026-04-09 00:48:35.576337 | orchestrator | ok: [testbed-manager] 2026-04-09 00:48:35.576346 | orchestrator | 2026-04-09 00:48:35.576354 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-04-09 00:48:35.576362 | orchestrator | Thursday 09 April 2026 00:48:14 +0000 (0:00:00.157) 0:04:26.017 ******** 2026-04-09 00:48:35.576370 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-09 00:48:35.576378 | orchestrator | 2026-04-09 00:48:35.576387 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-09 00:48:35.576395 | orchestrator | Thursday 09 April 2026 00:48:14 +0000 (0:00:00.242) 0:04:26.259 ******** 2026-04-09 00:48:35.576403 | orchestrator | changed: [testbed-manager] 2026-04-09 00:48:35.576411 | orchestrator | 2026-04-09 00:48:35.576420 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-09 00:48:35.576428 | orchestrator | 2026-04-09 00:48:35.576436 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-09 00:48:35.576444 | orchestrator | Thursday 09 April 2026 00:48:20 +0000 (0:00:06.380) 0:04:32.639 ******** 2026-04-09 00:48:35.576452 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:48:35.576461 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:48:35.576519 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:48:35.576529 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:48:35.576537 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:48:35.576545 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:48:35.576554 | orchestrator | 2026-04-09 00:48:35.576562 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-09 00:48:35.576570 | orchestrator | Thursday 09 April 2026 00:48:21 +0000 (0:00:00.821) 0:04:33.460 ******** 2026-04-09 00:48:35.576579 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-09 00:48:35.576587 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2026-04-09 00:48:35.576596 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-09 00:48:35.576611 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-09 00:48:35.576620 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-09 00:48:35.576628 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-09 00:48:35.576636 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-09 00:48:35.576644 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-09 00:48:35.576653 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-09 00:48:35.576661 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-09 00:48:35.576669 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-09 00:48:35.576677 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-09 00:48:35.576685 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-09 00:48:35.576693 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-09 00:48:35.576762 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-09 00:48:35.576784 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-09 00:48:35.576792 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-09 00:48:35.576800 | orchestrator | ok: [testbed-node-5 -> 
localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-09 00:48:35.576809 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-09 00:48:35.576817 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-09 00:48:35.576825 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-09 00:48:35.576833 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-09 00:48:35.576845 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-09 00:48:35.576858 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-09 00:48:35.576871 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-09 00:48:35.576884 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-09 00:48:35.576895 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-09 00:48:35.576907 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-09 00:48:35.576917 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-09 00:48:35.576927 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-09 00:48:35.576946 | orchestrator | 2026-04-09 00:48:35.576957 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-09 00:48:35.576968 | orchestrator | Thursday 09 April 2026 00:48:31 +0000 (0:00:09.782) 0:04:43.242 ******** 2026-04-09 00:48:35.576978 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.576990 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.577002 | 
orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.577009 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.577017 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.577024 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.577031 | orchestrator | 2026-04-09 00:48:35.577038 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-09 00:48:35.577045 | orchestrator | Thursday 09 April 2026 00:48:32 +0000 (0:00:00.572) 0:04:43.815 ******** 2026-04-09 00:48:35.577052 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:48:35.577059 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:48:35.577066 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:48:35.577073 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:48:35.577080 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:48:35.577087 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:48:35.577094 | orchestrator | 2026-04-09 00:48:35.577101 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:48:35.577108 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:48:35.577116 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-09 00:48:35.577123 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-09 00:48:35.577130 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-09 00:48:35.577138 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 00:48:35.577151 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 00:48:35.577159 | orchestrator | 
testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-09 00:48:35.577166 | orchestrator | 2026-04-09 00:48:35.577173 | orchestrator | 2026-04-09 00:48:35.577180 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:48:35.577187 | orchestrator | Thursday 09 April 2026 00:48:32 +0000 (0:00:00.394) 0:04:44.209 ******** 2026-04-09 00:48:35.577194 | orchestrator | =============================================================================== 2026-04-09 00:48:35.577201 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 64.77s 2026-04-09 00:48:35.577208 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.60s 2026-04-09 00:48:35.577215 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.05s 2026-04-09 00:48:35.577222 | orchestrator | kubectl : Install required packages ------------------------------------ 12.78s 2026-04-09 00:48:35.577229 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.91s 2026-04-09 00:48:35.577236 | orchestrator | Manage labels ----------------------------------------------------------- 9.78s 2026-04-09 00:48:35.577243 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.00s 2026-04-09 00:48:35.577250 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.71s 2026-04-09 00:48:35.577261 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.38s 2026-04-09 00:48:35.577268 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.04s 2026-04-09 00:48:35.577275 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.22s 2026-04-09 
00:48:35.577282 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.21s 2026-04-09 00:48:35.577289 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.68s 2026-04-09 00:48:35.577299 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.29s 2026-04-09 00:48:35.577307 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.06s 2026-04-09 00:48:35.577314 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.01s 2026-04-09 00:48:35.577321 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.97s 2026-04-09 00:48:35.577328 | orchestrator | k3s_server : Download vip rbac manifest to first master ----------------- 1.85s 2026-04-09 00:48:35.577335 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.81s 2026-04-09 00:48:35.577342 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.79s 2026-04-09 00:48:35.577349 | orchestrator | 2026-04-09 00:48:35 | INFO  | Task 58de2dbe-176c-414e-98aa-e05369eb2f9f is in state STARTED 2026-04-09 00:48:35.577356 | orchestrator | 2026-04-09 00:48:35 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:48:35.577363 | orchestrator | 2026-04-09 00:48:35 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:48:35.577370 | orchestrator | 2026-04-09 00:48:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:38.604622 | orchestrator | 2026-04-09 00:48:38 | INFO  | Task ec3ba8c3-da39-403a-b81f-b45faf847c3f is in state STARTED 2026-04-09 00:48:38.606198 | orchestrator | 2026-04-09 00:48:38 | INFO  | Task 58de2dbe-176c-414e-98aa-e05369eb2f9f is in state STARTED 2026-04-09 00:48:38.607943 | orchestrator | 2026-04-09 00:48:38 | INFO  | Task 
36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:48:38.609067 | orchestrator | 2026-04-09 00:48:38 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:48:38.609281 | orchestrator | 2026-04-09 00:48:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:41.655614 | orchestrator | 2026-04-09 00:48:41 | INFO  | Task ec3ba8c3-da39-403a-b81f-b45faf847c3f is in state STARTED 2026-04-09 00:48:41.655695 | orchestrator | 2026-04-09 00:48:41 | INFO  | Task 58de2dbe-176c-414e-98aa-e05369eb2f9f is in state SUCCESS 2026-04-09 00:48:41.656569 | orchestrator | 2026-04-09 00:48:41 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:48:41.657024 | orchestrator | 2026-04-09 00:48:41 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:48:41.657236 | orchestrator | 2026-04-09 00:48:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:44.692756 | orchestrator | 2026-04-09 00:48:44 | INFO  | Task ec3ba8c3-da39-403a-b81f-b45faf847c3f is in state SUCCESS 2026-04-09 00:48:44.692957 | orchestrator | 2026-04-09 00:48:44 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:48:44.693113 | orchestrator | 2026-04-09 00:48:44 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:48:44.693136 | orchestrator | 2026-04-09 00:48:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:47.726376 | orchestrator | 2026-04-09 00:48:47 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:48:47.726533 | orchestrator | 2026-04-09 00:48:47 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:48:47.726547 | orchestrator | 2026-04-09 00:48:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:48:50.769953 | orchestrator | 2026-04-09 00:48:50 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state 
STARTED 2026-04-09 00:48:50.771870 | orchestrator | 2026-04-09 00:48:50 | INFO  | Task 00400405-8791-41bf-85e9-3dd437e7459f is in state STARTED 2026-04-09 00:48:50.771947 | orchestrator | 2026-04-09 00:48:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:50:25.024745 | orchestrator | 2026-04-09 00:50:25 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:50:25.034082 | orchestrator | 2026-04-09 00:50:25 | INFO 
| Task 00400405-8791-41bf-85e9-3dd437e7459f is in state SUCCESS 2026-04-09 00:50:25.034602 | orchestrator | 2026-04-09 00:50:25.034650 | orchestrator | 2026-04-09 00:50:25.034669 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-09 00:50:25.034689 | orchestrator | 2026-04-09 00:50:25.034706 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-09 00:50:25.034722 | orchestrator | Thursday 09 April 2026 00:48:35 +0000 (0:00:00.218) 0:00:00.218 ******** 2026-04-09 00:50:25.034734 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-09 00:50:25.034745 | orchestrator | 2026-04-09 00:50:25.034755 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-09 00:50:25.034765 | orchestrator | Thursday 09 April 2026 00:48:36 +0000 (0:00:01.065) 0:00:01.283 ******** 2026-04-09 00:50:25.034776 | orchestrator | changed: [testbed-manager] 2026-04-09 00:50:25.034787 | orchestrator | 2026-04-09 00:50:25.034797 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-09 00:50:25.034807 | orchestrator | Thursday 09 April 2026 00:48:38 +0000 (0:00:01.345) 0:00:02.629 ******** 2026-04-09 00:50:25.034854 | orchestrator | changed: [testbed-manager] 2026-04-09 00:50:25.034865 | orchestrator | 2026-04-09 00:50:25.034875 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:50:25.034886 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-09 00:50:25.034898 | orchestrator | 2026-04-09 00:50:25.034908 | orchestrator | 2026-04-09 00:50:25.034918 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:50:25.034928 | orchestrator | Thursday 09 April 2026 00:48:38 +0000 (0:00:00.401) 0:00:03.031 
********
2026-04-09 00:50:25.034938 | orchestrator | ===============================================================================
2026-04-09 00:50:25.034949 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.35s
2026-04-09 00:50:25.034959 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.07s
2026-04-09 00:50:25.034969 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.40s
2026-04-09 00:50:25.035034 | orchestrator |
2026-04-09 00:50:25.035044 | orchestrator |
2026-04-09 00:50:25.035054 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-09 00:50:25.035064 | orchestrator |
2026-04-09 00:50:25.035074 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-09 00:50:25.035084 | orchestrator | Thursday 09 April 2026 00:48:35 +0000 (0:00:00.186) 0:00:00.186 ********
2026-04-09 00:50:25.035153 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:25.035165 | orchestrator |
2026-04-09 00:50:25.035203 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-09 00:50:25.035214 | orchestrator | Thursday 09 April 2026 00:48:36 +0000 (0:00:00.522) 0:00:00.918 ********
2026-04-09 00:50:25.035226 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:25.035243 | orchestrator |
2026-04-09 00:50:25.035260 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-09 00:50:25.035285 | orchestrator | Thursday 09 April 2026 00:48:36 +0000 (0:00:00.522) 0:00:01.441 ********
2026-04-09 00:50:25.035303 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-09 00:50:25.035319 | orchestrator |
2026-04-09 00:50:25.035336 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-09 00:50:25.035374 | orchestrator | Thursday 09 April 2026 00:48:37 +0000 (0:00:00.943) 0:00:02.384 ********
2026-04-09 00:50:25.035555 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:25.035576 | orchestrator |
2026-04-09 00:50:25.035593 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-09 00:50:25.035610 | orchestrator | Thursday 09 April 2026 00:48:38 +0000 (0:00:01.099) 0:00:03.483 ********
2026-04-09 00:50:25.035742 | orchestrator | changed: [testbed-manager]
2026-04-09 00:50:25.035762 | orchestrator |
2026-04-09 00:50:25.035781 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-09 00:50:25.035798 | orchestrator | Thursday 09 April 2026 00:48:39 +0000 (0:00:00.508) 0:00:03.992 ********
2026-04-09 00:50:25.035814 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-09 00:50:25.035832 | orchestrator |
2026-04-09 00:50:25.035850 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-09 00:50:25.035946 | orchestrator | Thursday 09 April 2026 00:48:40 +0000 (0:00:01.518) 0:00:05.511 ********
2026-04-09 00:50:25.035965 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-09 00:50:25.035983 | orchestrator |
2026-04-09 00:50:25.035999 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-09 00:50:25.036016 | orchestrator | Thursday 09 April 2026 00:48:41 +0000 (0:00:00.344) 0:00:06.324 ********
2026-04-09 00:50:25.036033 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:25.036051 | orchestrator |
2026-04-09 00:50:25.036064 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-09 00:50:25.036077 | orchestrator | Thursday 09 April 2026 00:48:42 +0000 (0:00:00.344) 0:00:06.669 ********
2026-04-09 00:50:25.036091 | orchestrator | ok: [testbed-manager]
2026-04-09 00:50:25.036119 | orchestrator |
2026-04-09 00:50:25.036146 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:50:25.036160 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:50:25.036174 | orchestrator |
2026-04-09 00:50:25.036187 | orchestrator |
2026-04-09 00:50:25.036200 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:50:25.036231 | orchestrator | Thursday 09 April 2026 00:48:42 +0000 (0:00:00.261) 0:00:06.931 ********
2026-04-09 00:50:25.036246 | orchestrator | ===============================================================================
2026-04-09 00:50:25.036261 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s
2026-04-09 00:50:25.036275 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.10s
2026-04-09 00:50:25.036288 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.94s
2026-04-09 00:50:25.036334 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.81s
2026-04-09 00:50:25.036350 | orchestrator | Get home directory of operator user ------------------------------------- 0.73s
2026-04-09 00:50:25.036474 | orchestrator | Create .kube directory -------------------------------------------------- 0.52s
2026-04-09 00:50:25.036491 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.51s
2026-04-09 00:50:25.036500 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.35s
2026-04-09 00:50:25.036521 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.26s
2026-04-09 00:50:25.036529 | orchestrator |
2026-04-09 00:50:25.036931 | orchestrator |
2026-04-09 00:50:25.036947 | orchestrator | PLAY
[Group hosts based on configuration] **************************************
2026-04-09 00:50:25.036955 | orchestrator |
2026-04-09 00:50:25.036964 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:50:25.036972 | orchestrator | Thursday 09 April 2026 00:45:01 +0000 (0:00:00.438) 0:00:00.438 ********
2026-04-09 00:50:25.036981 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:25.036990 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:25.036998 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:25.037006 | orchestrator |
2026-04-09 00:50:25.037015 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:50:25.037023 | orchestrator | Thursday 09 April 2026 00:45:02 +0000 (0:00:00.628) 0:00:01.067 ********
2026-04-09 00:50:25.037032 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-09 00:50:25.037040 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-09 00:50:25.037049 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-09 00:50:25.037057 | orchestrator |
2026-04-09 00:50:25.037065 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-09 00:50:25.037073 | orchestrator |
2026-04-09 00:50:25.037081 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-09 00:50:25.037090 | orchestrator | Thursday 09 April 2026 00:45:03 +0000 (0:00:00.716) 0:00:01.783 ********
2026-04-09 00:50:25.037099 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:50:25.037107 | orchestrator |
2026-04-09 00:50:25.037115 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-09 00:50:25.037124 | orchestrator | Thursday 09 April 2026 00:45:04 +0000 (0:00:01.101) 0:00:02.885 ********
2026-04-09 00:50:25.037132 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:25.037140 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:25.037148 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:25.037157 | orchestrator |
2026-04-09 00:50:25.037165 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-09 00:50:25.037173 | orchestrator | Thursday 09 April 2026 00:45:06 +0000 (0:00:02.336) 0:00:05.221 ********
2026-04-09 00:50:25.037182 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:50:25.037190 | orchestrator |
2026-04-09 00:50:25.037198 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-09 00:50:25.037207 | orchestrator | Thursday 09 April 2026 00:45:07 +0000 (0:00:00.828) 0:00:06.050 ********
2026-04-09 00:50:25.037215 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:25.037223 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:25.037231 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:25.037240 | orchestrator |
2026-04-09 00:50:25.037248 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-09 00:50:25.037256 | orchestrator | Thursday 09 April 2026 00:45:08 +0000 (0:00:01.218) 0:00:07.268 ********
2026-04-09 00:50:25.037264 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-09 00:50:25.037273 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-09 00:50:25.037281 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-09 00:50:25.037289 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-09 00:50:25.037305 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-09 00:50:25.037314 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-09 00:50:25.037322 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-09 00:50:25.037340 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-09 00:50:25.037404 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-09 00:50:25.037414 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-09 00:50:25.037422 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-09 00:50:25.037430 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-09 00:50:25.037439 | orchestrator |
2026-04-09 00:50:25.037489 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-09 00:50:25.037498 | orchestrator | Thursday 09 April 2026 00:45:12 +0000 (0:00:03.603) 0:00:10.872 ********
2026-04-09 00:50:25.037506 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-09 00:50:25.037515 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-09 00:50:25.037523 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-09 00:50:25.037531 | orchestrator |
2026-04-09 00:50:25.037561 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-09 00:50:25.037571 | orchestrator | Thursday 09 April 2026 00:45:14 +0000 (0:00:02.003) 0:00:12.581 ********
2026-04-09 00:50:25.037581 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-09 00:50:25.037591 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-09
00:50:25.037601 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-09 00:50:25.037610 | orchestrator |
2026-04-09 00:50:25.037620 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-09 00:50:25.037630 | orchestrator | Thursday 09 April 2026 00:45:16 +0000 (0:00:00.727) 0:00:14.584 ********
2026-04-09 00:50:25.037639 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-09 00:50:25.037649 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.037667 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-09 00:50:25.037677 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.037687 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-09 00:50:25.037696 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:25.037706 | orchestrator |
2026-04-09 00:50:25.037717 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-09 00:50:25.037817 | orchestrator | Thursday 09 April 2026 00:45:16 +0000 (0:00:00.727) 0:00:15.311 ********
2026-04-09 00:50:25.037830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:50:25.037845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.037856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.037880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.037891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.037907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.037916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:50:25.037925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:50:25.037934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:50:25.037954 | orchestrator |
2026-04-09 00:50:25.037962 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-09 00:50:25.037971 | orchestrator | Thursday 09 April 2026 00:45:18 +0000 (0:00:02.044) 0:00:17.356 ********
2026-04-09 00:50:25.037980 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:25.037988 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:25.037997 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:25.038005 | orchestrator |
2026-04-09 00:50:25.038060 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-09 00:50:25.038072 | orchestrator | Thursday 09 April 2026 00:45:19 +0000 (0:00:01.040) 0:00:18.397 ********
2026-04-09 00:50:25.038081 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-09 00:50:25.038090 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-09 00:50:25.038098 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-09 00:50:25.038106 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-09 00:50:25.038132 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-09 00:50:25.038140 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-09 00:50:25.038148 | orchestrator |
2026-04-09 00:50:25.038157 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-09 00:50:25.038165 | orchestrator | Thursday 09 April 2026 00:45:22 +0000 (0:00:02.605) 0:00:21.002 ********
2026-04-09 00:50:25.038173 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:25.038182 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:25.038190 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:25.038198 | orchestrator |
2026-04-09 00:50:25.038206 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-09 00:50:25.038215 | orchestrator | Thursday 09 April 2026 00:45:23 +0000 (0:00:01.117) 0:00:22.120 ********
2026-04-09 00:50:25.038223 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:25.038231 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:25.038239 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:25.038248 | orchestrator |
2026-04-09 00:50:25.038256 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-04-09 00:50:25.038264 | orchestrator | Thursday 09 April 2026 00:45:25 +0000 (0:00:01.573) 0:00:23.693 ********
2026-04-09 00:50:25.038273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:50:25.038291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:50:25.038300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:50:25.038315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__210d853232b9d1d4cef78a115128c0bf865de41a', 
'__omit_place_holder__210d853232b9d1d4cef78a115128c0bf865de41a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 00:50:25.038325 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.038334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 00:50:25.038346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:50:25.038355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:50:25.038369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__210d853232b9d1d4cef78a115128c0bf865de41a', '__omit_place_holder__210d853232b9d1d4cef78a115128c0bf865de41a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 00:50:25.038378 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.038440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 00:50:25.038455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:50:25.038464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:50:25.038477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__210d853232b9d1d4cef78a115128c0bf865de41a', '__omit_place_holder__210d853232b9d1d4cef78a115128c0bf865de41a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 00:50:25.038485 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.038494 | orchestrator | 2026-04-09 00:50:25.038502 | orchestrator | TASK [loadbalancer : Copying checks for 
services which are enabled] ************ 2026-04-09 00:50:25.038511 | orchestrator | Thursday 09 April 2026 00:45:26 +0000 (0:00:01.574) 0:00:25.268 ******** 2026-04-09 00:50:25.038519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.038534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.038553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.038560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.038567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:50:25.038578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__210d853232b9d1d4cef78a115128c0bf865de41a', 
'__omit_place_holder__210d853232b9d1d4cef78a115128c0bf865de41a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 00:50:25.038585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.038593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.038606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:50:25.038619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:50:25.038626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__210d853232b9d1d4cef78a115128c0bf865de41a', '__omit_place_holder__210d853232b9d1d4cef78a115128c0bf865de41a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 00:50:25.038634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__210d853232b9d1d4cef78a115128c0bf865de41a', 
'__omit_place_holder__210d853232b9d1d4cef78a115128c0bf865de41a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-09 00:50:25.038642 | orchestrator | 2026-04-09 00:50:25.038649 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-09 00:50:25.038659 | orchestrator | Thursday 09 April 2026 00:45:31 +0000 (0:00:05.230) 0:00:30.498 ******** 2026-04-09 00:50:25.038667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.038674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.038691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.038699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.038706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.038713 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.038724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:50:25.038732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:50:25.038739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:50:25.038750 | orchestrator | 2026-04-09 00:50:25.038757 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-09 00:50:25.038764 | orchestrator | Thursday 09 April 2026 00:45:35 +0000 (0:00:03.433) 0:00:33.931 ******** 2026-04-09 00:50:25.038772 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-09 00:50:25.038783 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-09 00:50:25.038790 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-09 00:50:25.038797 | orchestrator | 2026-04-09 00:50:25.038804 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-09 00:50:25.038811 | orchestrator | Thursday 09 April 2026 00:45:37 +0000 (0:00:02.167) 0:00:36.099 ******** 2026-04-09 00:50:25.038818 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-09 00:50:25.038825 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-09 00:50:25.038832 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-09 00:50:25.038839 | orchestrator | 2026-04-09 00:50:25.038846 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-09 
00:50:25.038853 | orchestrator | Thursday 09 April 2026 00:45:41 +0000 (0:00:03.709) 0:00:39.808 ******** 2026-04-09 00:50:25.038860 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.038867 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.038874 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.038881 | orchestrator | 2026-04-09 00:50:25.038888 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-09 00:50:25.038895 | orchestrator | Thursday 09 April 2026 00:45:41 +0000 (0:00:00.568) 0:00:40.377 ******** 2026-04-09 00:50:25.038902 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-09 00:50:25.038911 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-09 00:50:25.038918 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-09 00:50:25.038925 | orchestrator | 2026-04-09 00:50:25.038932 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-09 00:50:25.038939 | orchestrator | Thursday 09 April 2026 00:45:43 +0000 (0:00:02.145) 0:00:42.523 ******** 2026-04-09 00:50:25.038946 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-09 00:50:25.038953 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-09 00:50:25.038960 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-09 00:50:25.038968 | orchestrator | 2026-04-09 00:50:25.038975 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 
2026-04-09 00:50:25.038982 | orchestrator | Thursday 09 April 2026 00:45:45 +0000 (0:00:01.950) 0:00:44.474 ******** 2026-04-09 00:50:25.038989 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.038996 | orchestrator | 2026-04-09 00:50:25.039006 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-09 00:50:25.039013 | orchestrator | Thursday 09 April 2026 00:45:46 +0000 (0:00:00.725) 0:00:45.199 ******** 2026-04-09 00:50:25.039025 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-09 00:50:25.039032 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-09 00:50:25.039039 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-09 00:50:25.039046 | orchestrator | 2026-04-09 00:50:25.039053 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-09 00:50:25.039060 | orchestrator | Thursday 09 April 2026 00:45:48 +0000 (0:00:01.733) 0:00:46.933 ******** 2026-04-09 00:50:25.039067 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-09 00:50:25.039074 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-09 00:50:25.039081 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-09 00:50:25.039088 | orchestrator | 2026-04-09 00:50:25.039095 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-09 00:50:25.039102 | orchestrator | Thursday 09 April 2026 00:45:49 +0000 (0:00:01.567) 0:00:48.501 ******** 2026-04-09 00:50:25.039109 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.039116 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.039123 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.039130 | orchestrator | 2026-04-09 00:50:25.039137 | 
orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-09 00:50:25.039144 | orchestrator | Thursday 09 April 2026 00:45:50 +0000 (0:00:00.270) 0:00:48.771 ******** 2026-04-09 00:50:25.039151 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.039158 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.039164 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.039171 | orchestrator | 2026-04-09 00:50:25.039178 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-09 00:50:25.039185 | orchestrator | Thursday 09 April 2026 00:45:50 +0000 (0:00:00.279) 0:00:49.051 ******** 2026-04-09 00:50:25.039206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.039213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.039221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.039228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.039244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.039251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.039259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:50:25.039271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:50:25.039279 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:50:25.039286 | orchestrator | 2026-04-09 00:50:25.039293 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-09 00:50:25.039300 | orchestrator | Thursday 09 April 2026 00:45:53 +0000 (0:00:03.218) 0:00:52.269 ******** 2026-04-09 00:50:25.039308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:50:25.039324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:50:25.039331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:50:25.039338 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.039346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 00:50:25.039358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:50:25.039365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:50:25.039372 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.039396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 00:50:25.039409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:50:25.039417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:50:25.039424 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.039431 | orchestrator | 2026-04-09 00:50:25.039439 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-09 00:50:25.039446 | orchestrator | Thursday 09 April 2026 00:45:54 +0000 (0:00:00.529) 0:00:52.799 ******** 2026-04-09 00:50:25.039453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:50:25.039465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:50:25.039532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:50:25.039552 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.039559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:50:25.039573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:50:25.039584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:50:25.039591 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.039599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 00:50:25.039606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:50:25.039619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:50:25.039626 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:25.039633 | orchestrator |
2026-04-09 00:50:25.039640 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-04-09 00:50:25.039659 | orchestrator | Thursday 09 April 2026 00:45:54 +0000 (0:00:00.711) 0:00:53.511 ********
2026-04-09 00:50:25.039666 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-09 00:50:25.039687 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-09 00:50:25.039694 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-09 00:50:25.039701 | orchestrator |
2026-04-09 00:50:25.039708 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-04-09 00:50:25.039715 | orchestrator | Thursday 09 April 2026 00:45:56 +0000 (0:00:01.487) 0:00:54.998 ********
2026-04-09 00:50:25.039722 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-09 00:50:25.039729 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-09 00:50:25.039736 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-09 00:50:25.039743 | orchestrator |
2026-04-09 00:50:25.039750 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-04-09 00:50:25.039757 | orchestrator | Thursday 09 April 2026 00:45:58 +0000 (0:00:01.621) 0:00:56.619 ********
2026-04-09 00:50:25.039764 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 00:50:25.039771 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 00:50:25.039778 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-09 00:50:25.039785 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 00:50:25.039792 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.039799 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 00:50:25.039806 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.039813 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-09 00:50:25.039821 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:25.039828 | orchestrator |
2026-04-09 00:50:25.039835 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-09 00:50:25.039846 | orchestrator | Thursday 09 April 2026 00:45:58 +0000 (0:00:00.696) 0:00:57.316 ********
2026-04-09 00:50:25.039854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 00:50:25.039862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:50:25.039874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 00:50:25.039888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:50:25.040049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:50:25.040058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:50:25.040071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:50:25.040079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:50:25.040086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:50:25.040102 | orchestrator |
2026-04-09 00:50:25.040109 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-04-09 00:50:25.040116 | orchestrator | Thursday 09 April 2026 00:46:01 +0000 (0:00:02.298) 0:00:59.615 ********
2026-04-09 00:50:25.040123 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 00:50:25.040131 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:50:25.040138 | orchestrator | }
2026-04-09 00:50:25.040145 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 00:50:25.040157 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:50:25.040164 | orchestrator | }
2026-04-09 00:50:25.040172 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 00:50:25.040179 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:50:25.040186 | orchestrator | }
2026-04-09 00:50:25.040193 | orchestrator |
2026-04-09 00:50:25.040200 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 00:50:25.040207 | orchestrator | Thursday 09 April 2026 00:46:01 +0000 (0:00:00.300) 0:00:59.916 ********
2026-04-09 00:50:25.040214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-09 00:50:25.040222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:50:25.040229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:50:25.040237 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.040249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-09 00:50:25.040256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:50:25.040269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:50:25.040277 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.040290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-09 00:50:25.040298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-09 00:50:25.040305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-09 00:50:25.040313 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:25.040320 | orchestrator |
2026-04-09 00:50:25.040327 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-04-09 00:50:25.040334 | orchestrator | Thursday 09 April 2026 00:46:02 +0000 (0:00:00.766) 0:01:01.054 ********
2026-04-09 00:50:25.040341 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:50:25.040348 | orchestrator |
2026-04-09 00:50:25.040355 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-04-09 00:50:25.040362 | orchestrator | Thursday 09 April 2026 00:46:03 +0000 (0:00:00.766) 0:01:01.820 ********
2026-04-09 00:50:25.040375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.040408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.040416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 00:50:25.040424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 00:50:25.040432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.040443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.040455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.040462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.040474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.040481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 00:50:25.040489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.040503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.040514 | orchestrator |
2026-04-09 00:50:25.040521 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-04-09 00:50:25.040528 | orchestrator | Thursday 09 April 2026 00:46:06 +0000 (0:00:03.469) 0:01:05.289 ********
2026-04-09 00:50:25.040535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.040548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 00:50:25.040556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.040563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.040570 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.040578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.040593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 00:50:25.040601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.040613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.040620 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.040627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.040635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-09 00:50:25.040642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.040657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.040664 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:25.040671 | orchestrator |
2026-04-09 00:50:25.040679 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-04-09 00:50:25.040686 | orchestrator | Thursday 09 April 2026 00:46:07 +0000 (0:00:00.593) 0:01:05.882 ********
2026-04-09 00:50:25.040694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.040703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.040711 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.040718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.040725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.040737 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.040744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.040751 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.040758 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.040765 | orchestrator | 2026-04-09 00:50:25.040772 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-09 00:50:25.040780 | orchestrator | Thursday 09 April 2026 00:46:08 +0000 (0:00:00.890) 0:01:06.772 ******** 2026-04-09 00:50:25.040787 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.040794 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.040801 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.040808 | orchestrator | 2026-04-09 00:50:25.040815 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-09 00:50:25.040822 | orchestrator | Thursday 09 April 2026 00:46:09 +0000 (0:00:01.132) 0:01:07.905 ******** 2026-04-09 00:50:25.040829 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.040836 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.040843 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.040855 | orchestrator | 2026-04-09 00:50:25.040862 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-09 00:50:25.040869 | orchestrator | Thursday 09 April 2026 00:46:11 +0000 (0:00:01.815) 0:01:09.720 ******** 2026-04-09 00:50:25.040876 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.040883 | orchestrator | 2026-04-09 00:50:25.040890 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-09 00:50:25.040897 | orchestrator | Thursday 09 April 2026 00:46:11 +0000 (0:00:00.539) 0:01:10.260 ******** 
2026-04-09 00:50:25.040908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.040916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.040929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.040937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.040949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.040957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.040968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-04-09 00:50:25.040976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.040988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.040995 | orchestrator | 2026-04-09 00:50:25.041002 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-09 00:50:25.041010 | orchestrator | Thursday 09 April 2026 00:46:14 +0000 (0:00:03.084) 0:01:13.344 ******** 2026-04-09 00:50:25.041023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.041031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.041042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.041049 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.041057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.041069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.041081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.041088 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.041096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.041106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.041114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.041121 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.041128 | orchestrator | 2026-04-09 00:50:25.041136 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-09 00:50:25.041143 | orchestrator | Thursday 09 April 2026 00:46:15 +0000 (0:00:00.815) 0:01:14.160 ******** 2026-04-09 00:50:25.041151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.041771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.041831 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.041844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.041857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.041868 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.041878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.041890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.041901 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.041913 | orchestrator | 2026-04-09 00:50:25.041923 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-09 00:50:25.041935 | orchestrator | Thursday 09 April 2026 00:46:16 +0000 (0:00:00.796) 0:01:14.957 ******** 2026-04-09 00:50:25.041945 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.041955 | orchestrator 
| changed: [testbed-node-1] 2026-04-09 00:50:25.041964 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.041974 | orchestrator | 2026-04-09 00:50:25.041985 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-09 00:50:25.041994 | orchestrator | Thursday 09 April 2026 00:46:17 +0000 (0:00:01.228) 0:01:16.186 ******** 2026-04-09 00:50:25.042005 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.042168 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.042190 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.042200 | orchestrator | 2026-04-09 00:50:25.042210 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-09 00:50:25.042219 | orchestrator | Thursday 09 April 2026 00:46:19 +0000 (0:00:01.829) 0:01:18.015 ******** 2026-04-09 00:50:25.042228 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.042238 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.042247 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.042256 | orchestrator | 2026-04-09 00:50:25.042265 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-09 00:50:25.042292 | orchestrator | Thursday 09 April 2026 00:46:19 +0000 (0:00:00.332) 0:01:18.347 ******** 2026-04-09 00:50:25.042302 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.042312 | orchestrator | 2026-04-09 00:50:25.042321 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-09 00:50:25.042331 | orchestrator | Thursday 09 April 2026 00:46:20 +0000 (0:00:00.763) 0:01:19.110 ******** 2026-04-09 00:50:25.042343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 00:50:25.042409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 00:50:25.042421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-09 00:50:25.042431 | orchestrator | 2026-04-09 00:50:25.042441 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-09 00:50:25.042451 | orchestrator | Thursday 09 April 2026 00:46:22 +0000 (0:00:02.404) 0:01:21.515 ******** 2026-04-09 00:50:25.042463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 00:50:25.042473 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.042489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 
check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 00:50:25.042501 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.042514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-09 00:50:25.042521 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.042527 | orchestrator | 2026-04-09 00:50:25.042534 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-09 00:50:25.042540 | orchestrator | Thursday 09 April 2026 00:46:24 +0000 (0:00:01.552) 0:01:23.068 ******** 2026-04-09 00:50:25.042555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:50:25.042565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:50:25.042573 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.042579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:50:25.042586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:50:25.042593 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.042599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:50:25.042610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-09 00:50:25.042617 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.042629 | orchestrator | 2026-04-09 00:50:25.042636 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-09 00:50:25.042642 | orchestrator | Thursday 09 April 2026 00:46:26 +0000 (0:00:02.034) 0:01:25.102 ******** 2026-04-09 00:50:25.042649 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.042655 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.042661 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.042668 | orchestrator | 2026-04-09 00:50:25.042675 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-09 00:50:25.042681 | orchestrator | Thursday 09 April 2026 00:46:26 +0000 (0:00:00.362) 0:01:25.465 ******** 2026-04-09 00:50:25.042688 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.042694 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.042700 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.042707 | orchestrator | 2026-04-09 00:50:25.042713 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-09 00:50:25.042720 | orchestrator | Thursday 09 April 2026 00:46:28 +0000 (0:00:01.173) 
0:01:26.639 ******** 2026-04-09 00:50:25.042726 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.042732 | orchestrator | 2026-04-09 00:50:25.042739 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-09 00:50:25.042745 | orchestrator | Thursday 09 April 2026 00:46:28 +0000 (0:00:00.753) 0:01:27.392 ******** 2026-04-09 00:50:25.042760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.042770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.042779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.042791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.042805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.042818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.042827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.042836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.042851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.042859 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.042867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.042880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.042888 | orchestrator | 2026-04-09 00:50:25.042895 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-09 00:50:25.042903 | orchestrator | Thursday 09 April 2026 00:46:32 +0000 (0:00:03.215) 0:01:30.608 ******** 2026-04-09 00:50:25.042911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.042926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.042935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.043145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  
2026-04-09 00:50:25.043164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.043174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.043197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.043211 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.043222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.043233 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.043244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.043330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.043356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.043373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.043444 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.043456 | orchestrator | 2026-04-09 00:50:25.043468 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-09 00:50:25.043479 | orchestrator | Thursday 09 April 2026 00:46:32 +0000 (0:00:00.637) 0:01:31.246 ******** 2026-04-09 00:50:25.043508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.043518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.043525 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.043532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.043539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.043545 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.043552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.043558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.043563 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.043569 | orchestrator | 2026-04-09 00:50:25.043575 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-09 00:50:25.043580 | orchestrator | Thursday 09 April 2026 00:46:33 +0000 (0:00:01.008) 0:01:32.255 ******** 2026-04-09 00:50:25.043586 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.043598 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.043604 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.043610 | orchestrator | 2026-04-09 00:50:25.043615 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-09 00:50:25.043621 | orchestrator | Thursday 09 April 2026 00:46:34 +0000 (0:00:01.146) 0:01:33.401 ******** 2026-04-09 00:50:25.043627 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.043632 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.043638 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.043644 | orchestrator | 2026-04-09 00:50:25.043649 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-09 00:50:25.043660 | orchestrator | Thursday 09 April 
2026 00:46:36 +0000 (0:00:01.958) 0:01:35.360 ******** 2026-04-09 00:50:25.043666 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.043671 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.043677 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.043682 | orchestrator | 2026-04-09 00:50:25.043689 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-09 00:50:25.043699 | orchestrator | Thursday 09 April 2026 00:46:37 +0000 (0:00:00.276) 0:01:35.637 ******** 2026-04-09 00:50:25.043707 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.043715 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.043725 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.043737 | orchestrator | 2026-04-09 00:50:25.043789 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-09 00:50:25.043799 | orchestrator | Thursday 09 April 2026 00:46:37 +0000 (0:00:00.285) 0:01:35.923 ******** 2026-04-09 00:50:25.043807 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.043816 | orchestrator | 2026-04-09 00:50:25.043825 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-09 00:50:25.043833 | orchestrator | Thursday 09 April 2026 00:46:38 +0000 (0:00:00.814) 0:01:36.737 ******** 2026-04-09 00:50:25.043842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.043917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-09 00:50:25.043929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.043946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.043964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.043971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.043977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.043985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.043992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 00:50:25.044004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.044067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 00:50:25.044084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044140 | orchestrator |
2026-04-09 00:50:25.044149 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-04-09 00:50:25.044159 | orchestrator | Thursday 09 April 2026 00:46:41 +0000 (0:00:03.522) 0:01:40.259 ********
2026-04-09 00:50:25.044175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.044186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 00:50:25.044195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044252 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.044258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.044265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 00:50:25.044276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044316 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.044321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.044331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-09 00:50:25.044346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-09 00:50:25.044411 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:25.044418 | orchestrator |
2026-04-09 00:50:25.044424 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-04-09 00:50:25.044430 | orchestrator | Thursday 09 April 2026 00:46:42 +0000 (0:00:00.981) 0:01:41.241 ********
2026-04-09 00:50:25.044437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.044444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.044450 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.044459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.044470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.044476 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.044481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.044487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.044493 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:25.044499 | orchestrator |
2026-04-09 00:50:25.044504 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-04-09 00:50:25.044510 | orchestrator | Thursday 09 April 2026 00:46:43 +0000 (0:00:01.128) 0:01:42.369 ********
2026-04-09 00:50:25.044516 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:25.044521 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:25.044527 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:25.044532 | orchestrator |
2026-04-09 00:50:25.044538 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-04-09 00:50:25.044544 | orchestrator | Thursday 09 April 2026 00:46:44 +0000 (0:00:01.090) 0:01:43.460 ********
2026-04-09 00:50:25.044550 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:25.044555 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:25.044561 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:25.044567 | orchestrator |
2026-04-09 00:50:25.044572 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-04-09 00:50:25.044582 | orchestrator | Thursday 09 April 2026 00:46:46 +0000 (0:00:01.851) 0:01:45.311 ********
2026-04-09 00:50:25.044588 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.044594 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.044599 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:25.044605 | orchestrator |
2026-04-09 00:50:25.044611 | orchestrator | TASK [include_role : glance] ***************************************************
2026-04-09 00:50:25.044616 | orchestrator | Thursday 09 April 2026 00:46:47 +0000 (0:00:00.767) 0:01:45.615 ********
2026-04-09 00:50:25.044622 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:50:25.044628 | orchestrator |
2026-04-09 00:50:25.044633 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-04-09 00:50:25.044639 | orchestrator | Thursday 09 April 2026 00:46:47 +0000 (0:00:00.767) 0:01:46.383 ********
2026-04-09 00:50:25.044645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-09 00:50:25.045958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-09 00:50:25.045991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-09 00:50:25.046013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-09 00:50:25.047336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-09 00:50:25.047437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:50:25.047450 | orchestrator | 2026-04-09 00:50:25.047457 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-09 00:50:25.047463 | orchestrator | Thursday 09 April 2026 00:46:53 +0000 (0:00:05.402) 0:01:51.786 ******** 2026-04-09 00:50:25.047483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 00:50:25.047497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:50:25.047503 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.047516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 
00:50:25.047525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 
00:50:25.047535 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.047546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-09 00:50:25.047555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-09 00:50:25.047566 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.047571 | orchestrator | 2026-04-09 00:50:25.047577 | orchestrator | 
TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-09 00:50:25.047583 | orchestrator | Thursday 09 April 2026 00:46:56 +0000 (0:00:03.544) 0:01:55.330 ******** 2026-04-09 00:50:25.047589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:50:25.047596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:50:25.047602 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.047613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:50:25.047619 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:50:25.047628 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.047634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:50:25.047640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-09 00:50:25.047647 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.047653 | orchestrator | 2026-04-09 00:50:25.047658 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] 
************* 2026-04-09 00:50:25.047664 | orchestrator | Thursday 09 April 2026 00:47:00 +0000 (0:00:04.082) 0:01:59.412 ******** 2026-04-09 00:50:25.047670 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.047676 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.047681 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.047687 | orchestrator | 2026-04-09 00:50:25.047693 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-09 00:50:25.047700 | orchestrator | Thursday 09 April 2026 00:47:02 +0000 (0:00:01.245) 0:02:00.657 ******** 2026-04-09 00:50:25.047706 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.047712 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.047718 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.047723 | orchestrator | 2026-04-09 00:50:25.047729 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-09 00:50:25.047735 | orchestrator | Thursday 09 April 2026 00:47:03 +0000 (0:00:01.894) 0:02:02.551 ******** 2026-04-09 00:50:25.047740 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.047746 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.047752 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.047757 | orchestrator | 2026-04-09 00:50:25.047763 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-09 00:50:25.047769 | orchestrator | Thursday 09 April 2026 00:47:04 +0000 (0:00:00.265) 0:02:02.817 ******** 2026-04-09 00:50:25.047774 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.047780 | orchestrator | 2026-04-09 00:50:25.047786 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-09 00:50:25.047792 | orchestrator | Thursday 09 April 2026 00:47:05 +0000 (0:00:00.788) 
0:02:03.606 ******** 2026-04-09 00:50:25.047798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.047812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.047818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.047824 | orchestrator | 2026-04-09 00:50:25.047829 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-09 00:50:25.047835 | orchestrator | Thursday 09 April 2026 00:47:09 +0000 (0:00:04.543) 0:02:08.149 ******** 2026-04-09 00:50:25.047843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.047849 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.047856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.047861 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.047867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.047877 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.047883 | orchestrator | 2026-04-09 00:50:25.047892 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-09 00:50:25.047898 | orchestrator | Thursday 09 April 2026 00:47:10 +0000 (0:00:00.430) 0:02:08.579 ******** 2026-04-09 00:50:25.047905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.047913 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.047919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.047925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.047931 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.047937 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.047942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.047948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.047954 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.047960 | orchestrator | 2026-04-09 00:50:25.047965 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-09 00:50:25.047971 | orchestrator | Thursday 09 April 2026 00:47:10 +0000 (0:00:00.935) 0:02:09.515 ******** 2026-04-09 00:50:25.047977 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.047983 | orchestrator | changed: [testbed-node-1] 
2026-04-09 00:50:25.047988 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.047994 | orchestrator | 2026-04-09 00:50:25.048000 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-09 00:50:25.048005 | orchestrator | Thursday 09 April 2026 00:47:12 +0000 (0:00:01.624) 0:02:11.139 ******** 2026-04-09 00:50:25.048011 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.048016 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.048021 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.048026 | orchestrator | 2026-04-09 00:50:25.048031 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-09 00:50:25.048038 | orchestrator | Thursday 09 April 2026 00:47:14 +0000 (0:00:01.790) 0:02:12.929 ******** 2026-04-09 00:50:25.048043 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.048048 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.048053 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.048058 | orchestrator | 2026-04-09 00:50:25.048063 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-09 00:50:25.048068 | orchestrator | Thursday 09 April 2026 00:47:14 +0000 (0:00:00.607) 0:02:13.537 ******** 2026-04-09 00:50:25.048078 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.048083 | orchestrator | 2026-04-09 00:50:25.048088 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-09 00:50:25.048093 | orchestrator | Thursday 09 April 2026 00:47:15 +0000 (0:00:00.967) 0:02:14.504 ******** 2026-04-09 00:50:25.048103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-04-09 00:50:25.048112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:50:25.048125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:50:25.048131 | orchestrator | 2026-04-09 00:50:25.048136 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-09 00:50:25.048141 | orchestrator | Thursday 09 April 2026 00:47:20 +0000 (0:00:04.942) 0:02:19.447 ******** 2026-04-09 00:50:25.048149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:50:25.048158 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.048167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:50:25.048173 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.048184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:50:25.048193 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.048198 | orchestrator | 2026-04-09 00:50:25.048203 | 
orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-09 00:50:25.048208 | orchestrator | Thursday 09 April 2026 00:47:21 +0000 (0:00:01.114) 0:02:20.561 ******** 2026-04-09 00:50:25.048214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 00:50:25.048221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:50:25.048227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 00:50:25.048233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:50:25.048238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 00:50:25.048244 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
00:50:25.048252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 00:50:25.048257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:50:25.048263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 00:50:25.048268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:50:25.048273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 00:50:25.048278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 00:50:25.048287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:50:25.048292 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.048298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-09 00:50:25.048303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-09 00:50:25.048308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-09 00:50:25.048313 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.048318 | orchestrator | 2026-04-09 00:50:25.048323 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-09 00:50:25.048328 | orchestrator | Thursday 09 April 2026 00:47:23 +0000 (0:00:01.483) 0:02:22.044 ******** 2026-04-09 00:50:25.048334 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.048339 | orchestrator | changed: [testbed-node-1] 2026-04-09 
00:50:25.048344 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.048349 | orchestrator | 2026-04-09 00:50:25.048354 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-09 00:50:25.048362 | orchestrator | Thursday 09 April 2026 00:47:24 +0000 (0:00:01.073) 0:02:23.117 ******** 2026-04-09 00:50:25.048367 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.048372 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.048377 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.048399 | orchestrator | 2026-04-09 00:50:25.048407 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-09 00:50:25.048414 | orchestrator | Thursday 09 April 2026 00:47:26 +0000 (0:00:02.085) 0:02:25.203 ******** 2026-04-09 00:50:25.048421 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.048428 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.048436 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.048444 | orchestrator | 2026-04-09 00:50:25.048452 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-09 00:50:25.048460 | orchestrator | Thursday 09 April 2026 00:47:27 +0000 (0:00:00.618) 0:02:25.822 ******** 2026-04-09 00:50:25.048468 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.048476 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.048484 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.048491 | orchestrator | 2026-04-09 00:50:25.048503 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-09 00:50:25.048512 | orchestrator | Thursday 09 April 2026 00:47:27 +0000 (0:00:00.343) 0:02:26.165 ******** 2026-04-09 00:50:25.048573 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.048588 | orchestrator | 
2026-04-09 00:50:25.048596 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-09 00:50:25.048604 | orchestrator | Thursday 09 April 2026 00:47:28 +0000 (0:00:00.974) 0:02:27.139 ******** 2026-04-09 00:50:25.048611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:50:25.048624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-09 00:50:25.048630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:50:25.048641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:50:25.048649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:50:25.048654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:50:25.048664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:50:25.048670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:50:25.048678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:50:25.048683 | orchestrator | 2026-04-09 00:50:25.048689 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-09 00:50:25.048694 | orchestrator | Thursday 09 April 2026 00:47:32 +0000 (0:00:04.305) 0:02:31.445 ******** 2026-04-09 00:50:25.048702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:50:25.048708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:50:25.048713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:50:25.048718 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.048727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:50:25.048736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:50:25.048741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:50:25.048747 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.048755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:50:25.048760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:50:25.048769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:50:25.048778 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.048783 | orchestrator | 2026-04-09 00:50:25.048788 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-09 00:50:25.048793 | orchestrator | Thursday 09 April 2026 00:47:33 +0000 (0:00:00.705) 0:02:32.150 ******** 2026-04-09 00:50:25.048799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 00:50:25.048805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 00:50:25.048811 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.048816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 00:50:25.048822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 00:50:25.048827 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.048832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 00:50:25.048840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-09 00:50:25.048845 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.048850 | orchestrator | 2026-04-09 00:50:25.048855 | 
orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-09 00:50:25.048861 | orchestrator | Thursday 09 April 2026 00:47:34 +0000 (0:00:00.901) 0:02:33.051 ******** 2026-04-09 00:50:25.048866 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.048871 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.048876 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.048881 | orchestrator | 2026-04-09 00:50:25.048886 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-09 00:50:25.048891 | orchestrator | Thursday 09 April 2026 00:47:35 +0000 (0:00:01.152) 0:02:34.204 ******** 2026-04-09 00:50:25.048896 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.048901 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.048906 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.048911 | orchestrator | 2026-04-09 00:50:25.048916 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-09 00:50:25.048921 | orchestrator | Thursday 09 April 2026 00:47:37 +0000 (0:00:02.256) 0:02:36.461 ******** 2026-04-09 00:50:25.048926 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.048931 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.048939 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.048944 | orchestrator | 2026-04-09 00:50:25.048949 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-09 00:50:25.048954 | orchestrator | Thursday 09 April 2026 00:47:38 +0000 (0:00:00.582) 0:02:37.044 ******** 2026-04-09 00:50:25.048959 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.048964 | orchestrator | 2026-04-09 00:50:25.048969 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-09 
00:50:25.048974 | orchestrator | Thursday 09 April 2026 00:47:39 +0000 (0:00:00.943) 0:02:37.987 ******** 2026-04-09 00:50:25.048984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.048989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.048998 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.049004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.049155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049160 | orchestrator | 2026-04-09 00:50:25.049166 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-09 00:50:25.049171 | orchestrator | Thursday 09 April 2026 00:47:43 +0000 (0:00:03.712) 0:02:41.700 ******** 2026-04-09 
00:50:25.049176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.049186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049200 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.049205 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.049244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049252 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.049257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.049265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049270 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.049276 | orchestrator | 2026-04-09 00:50:25.049285 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-09 00:50:25.049290 | orchestrator | 
Thursday 09 April 2026 00:47:44 +0000 (0:00:01.010) 0:02:42.711 ******** 2026-04-09 00:50:25.049296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.049301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.049307 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.049312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.049317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.049322 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.049327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.049368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.049376 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.049404 | orchestrator | 2026-04-09 
00:50:25.049410 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-09 00:50:25.049415 | orchestrator | Thursday 09 April 2026 00:47:45 +0000 (0:00:00.963) 0:02:43.674 ******** 2026-04-09 00:50:25.049420 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.049425 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.049430 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.049435 | orchestrator | 2026-04-09 00:50:25.049440 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-09 00:50:25.049445 | orchestrator | Thursday 09 April 2026 00:47:46 +0000 (0:00:01.099) 0:02:44.773 ******** 2026-04-09 00:50:25.049450 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.049455 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.049460 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.049465 | orchestrator | 2026-04-09 00:50:25.049473 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-09 00:50:25.049481 | orchestrator | Thursday 09 April 2026 00:47:48 +0000 (0:00:02.126) 0:02:46.900 ******** 2026-04-09 00:50:25.049489 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.049498 | orchestrator | 2026-04-09 00:50:25.049509 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-09 00:50:25.049518 | orchestrator | Thursday 09 April 2026 00:47:49 +0000 (0:00:01.385) 0:02:48.286 ******** 2026-04-09 00:50:25.049526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.049546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  
2026-04-09 00:50:25.049563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.049671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.049743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049765 | orchestrator | 2026-04-09 00:50:25.049772 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-09 00:50:25.049781 | orchestrator | Thursday 09 April 2026 00:47:53 +0000 (0:00:03.307) 0:02:51.593 ******** 2026-04-09 00:50:25.049793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.049801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049880 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.049886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.049896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.049926 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.049984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release//manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.049995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release//manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.050058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release//manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.050072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release//manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.050080 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.050089 | orchestrator | 2026-04-09 00:50:25.050102 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-09 00:50:25.050110 | orchestrator | Thursday 09 April 2026 00:47:53 +0000 (0:00:00.751) 0:02:52.345 ******** 2026-04-09 00:50:25.050118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.050126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.050135 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.050143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.050152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.050158 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.050163 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.050168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.050173 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.050178 | orchestrator | 2026-04-09 00:50:25.050271 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-09 00:50:25.050285 | orchestrator | Thursday 09 April 2026 00:47:55 +0000 (0:00:01.314) 0:02:53.660 ******** 2026-04-09 00:50:25.050293 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.050301 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.050309 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.050327 | orchestrator | 2026-04-09 00:50:25.050332 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-09 00:50:25.050338 | orchestrator | Thursday 09 April 2026 00:47:56 +0000 (0:00:01.064) 0:02:54.724 ******** 2026-04-09 00:50:25.050343 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.050348 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.050353 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.050358 | orchestrator | 2026-04-09 00:50:25.050363 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-09 00:50:25.050368 | orchestrator | Thursday 09 April 2026 00:47:57 +0000 (0:00:01.804) 0:02:56.529 ******** 2026-04-09 00:50:25.050373 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 
00:50:25.050396 | orchestrator | 2026-04-09 00:50:25.050402 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-09 00:50:25.050407 | orchestrator | Thursday 09 April 2026 00:47:58 +0000 (0:00:00.971) 0:02:57.501 ******** 2026-04-09 00:50:25.050413 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 00:50:25.050418 | orchestrator | 2026-04-09 00:50:25.050423 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-09 00:50:25.050428 | orchestrator | Thursday 09 April 2026 00:48:00 +0000 (0:00:01.968) 0:02:59.470 ******** 2026-04-09 00:50:25.050439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:50:25.050446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:50:25.050452 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.050504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:50:25.050521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:50:25.050527 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.050536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:50:25.050598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:50:25.050606 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.050611 | orchestrator | 2026-04-09 00:50:25.050616 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-09 00:50:25.050621 | orchestrator | Thursday 09 April 2026 00:48:03 +0000 (0:00:02.634) 0:03:02.104 ******** 2026-04-09 00:50:25.050630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:50:25.050636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:50:25.050641 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.050680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:50:25.050692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:50:25.050697 | orchestrator | skipping: 
[testbed-node-1] 2026-04-09 00:50:25.050705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:50:25.050715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 
'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release//mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-09 00:50:25.050720 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.050725 | orchestrator | 2026-04-09 00:50:25.050765 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-09 00:50:25.050772 | orchestrator | Thursday 09 April 2026 00:48:05 +0000 (0:00:02.102) 0:03:04.207 ******** 2026-04-09 00:50:25.050778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 00:50:25.050784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 00:50:25.050789 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.050794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 00:50:25.050803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 00:50:25.050808 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.050813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 00:50:25.050823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-09 00:50:25.050828 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.050833 | orchestrator | 2026-04-09 00:50:25.050838 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-09 00:50:25.050843 | orchestrator | Thursday 09 April 2026 00:48:08 +0000 (0:00:02.638) 0:03:06.845 ******** 2026-04-09 00:50:25.050848 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.050854 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.050858 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.050863 | orchestrator | 2026-04-09 00:50:25.050904 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-09 00:50:25.050911 | orchestrator | Thursday 09 April 2026 00:48:10 +0000 (0:00:02.059) 0:03:08.905 ******** 2026-04-09 00:50:25.050917 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.050922 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.050927 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.050932 | orchestrator | 2026-04-09 00:50:25.050938 | orchestrator | TASK [include_role : 
masakari] ************************************************* 2026-04-09 00:50:25.050946 | orchestrator | Thursday 09 April 2026 00:48:11 +0000 (0:00:01.457) 0:03:10.363 ******** 2026-04-09 00:50:25.050954 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.050962 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.050985 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.050994 | orchestrator | 2026-04-09 00:50:25.051002 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-09 00:50:25.051010 | orchestrator | Thursday 09 April 2026 00:48:12 +0000 (0:00:00.305) 0:03:10.668 ******** 2026-04-09 00:50:25.051018 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.051026 | orchestrator | 2026-04-09 00:50:25.051034 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-09 00:50:25.051042 | orchestrator | Thursday 09 April 2026 00:48:13 +0000 (0:00:01.054) 0:03:11.723 ******** 2026-04-09 00:50:25.051051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 00:50:25.051065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 00:50:25.051081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-09 00:50:25.051090 | orchestrator | 2026-04-09 00:50:25.051103 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-09 00:50:25.051109 | orchestrator | Thursday 09 April 2026 00:48:14 +0000 (0:00:01.835) 0:03:13.558 ******** 2026-04-09 00:50:25.051163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 00:50:25.051171 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.051176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 00:50:25.051182 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.051187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release//memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-09 00:50:25.051196 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.051204 | orchestrator | 2026-04-09 00:50:25.051211 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-09 00:50:25.051230 | orchestrator | Thursday 09 April 2026 00:48:15 +0000 (0:00:00.415) 0:03:13.973 ******** 2026-04-09 00:50:25.051239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-09 00:50:25.051253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-09 00:50:25.051261 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.051269 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.051278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-09 00:50:25.051286 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.051294 | orchestrator | 2026-04-09 00:50:25.051302 | 
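The per-service `haproxy` dictionaries repeated in the task items above all share one shape: each listener name maps to an `enabled` flag plus `mode`/`port` options and optional extra frontend/backend lines, and listeners are only rendered when `enabled` is true (hence the `skipping` results for memcached, whose listener carries `enabled: False`). A minimal Python sketch of that filtering, using a reduced hand-copy of the structures from the log — the real kolla-ansible haproxy-config role does this in Ansible/Jinja, so this is purely illustrative:

```python
# Illustrative only: service map reduced from the task output above;
# kolla-ansible performs this filtering in its haproxy-config role templates.

def enabled_haproxy_listeners(services):
    """Return listener names whose 'enabled' flag is truthy."""
    enabled = []
    for svc in services.values():
        for listener, conf in svc.get("haproxy", {}).items():
            if conf.get("enabled"):
                enabled.append(listener)
    return enabled

# Two entries condensed from the log: memcached's listener is disabled,
# neutron-server defines an internal and an external listener, both enabled.
services = {
    "memcached": {
        "haproxy": {
            "memcached": {"enabled": False, "mode": "tcp", "port": "11211"},
        },
    },
    "neutron-server": {
        "haproxy": {
            "neutron_server": {"enabled": True, "mode": "http", "port": "9696"},
            "neutron_server_external": {"enabled": True, "mode": "http",
                                        "port": "9696"},
        },
    },
}

print(enabled_haproxy_listeners(services))
# → ['neutron_server', 'neutron_server_external']
```

This mirrors why the "Copying over memcached haproxy config" task reports `changed` (the template is still written) while the firewall and single-external-frontend tasks skip every memcached item: those tasks gate on the listener's `enabled` flag.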
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-09 00:50:25.051310 | orchestrator | Thursday 09 April 2026 00:48:15 +0000 (0:00:00.598) 0:03:14.572 ******** 2026-04-09 00:50:25.051317 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.051325 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.051334 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.051343 | orchestrator | 2026-04-09 00:50:25.051348 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-09 00:50:25.051353 | orchestrator | Thursday 09 April 2026 00:48:16 +0000 (0:00:00.710) 0:03:15.282 ******** 2026-04-09 00:50:25.051358 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.051363 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.051368 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.051373 | orchestrator | 2026-04-09 00:50:25.051429 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-09 00:50:25.051436 | orchestrator | Thursday 09 April 2026 00:48:17 +0000 (0:00:01.269) 0:03:16.551 ******** 2026-04-09 00:50:25.051441 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.051446 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.051465 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.051470 | orchestrator | 2026-04-09 00:50:25.051476 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-09 00:50:25.051483 | orchestrator | Thursday 09 April 2026 00:48:18 +0000 (0:00:00.332) 0:03:16.884 ******** 2026-04-09 00:50:25.051491 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.051498 | orchestrator | 2026-04-09 00:50:25.051506 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-04-09 00:50:25.051513 | orchestrator | Thursday 09 April 2026 00:48:19 +0000 (0:00:01.130) 0:03:18.014 ******** 2026-04-09 00:50:25.051594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.051620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': 
'30'}}})  2026-04-09 00:50:25.051636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 00:50:25.051646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 00:50:25.051708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.051722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.051741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.051750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 00:50:25.051764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:50:25.051773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.051835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.051848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 00:50:25.051866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.051875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.051889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 00:50:25.051899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.051966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 00:50:25.051985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:50:25.051999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:50:25.052008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.052037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.052099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.052119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 00:50:25.052128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:50:25.052137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.052150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 00:50:25.052159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.052224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.052243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:50:25.052253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:50:25.052262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.052328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.052475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 00:50:25.052501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 00:50:25.052509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.052522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.052530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.052538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 00:50:25.052627 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:50:25.052640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.052650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 00:50:25.052656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.052665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.052670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:50:25.052721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:50:25.052728 | orchestrator | 2026-04-09 00:50:25.052733 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-09 00:50:25.052738 | orchestrator | Thursday 09 April 2026 00:48:24 +0000 (0:00:04.698) 0:03:22.712 ******** 2026-04-09 00:50:25.052744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.052753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.052759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 00:50:25.052801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 00:50:25.052809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.052814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.052819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.052828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 00:50:25.052833 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:50:25.052876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.052883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 00:50:25.052888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.052893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.052901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:50:25.052906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:50:25.052915 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.052967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.052975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.052984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release//neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.052989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 00:50:25.053032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release//neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 
5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.053040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 00:50:25.053045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/release//neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-09 00:50:25.053055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.053065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release//neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-09 00:50:25.053112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.053120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release//neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.053125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.053130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 00:50:25.053138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release//neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.053143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:50:25.053151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release//neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.053191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-09 00:50:25.053198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.053203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:50:25.053211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 00:50:25.053220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.053225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release//neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.053267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.053281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release//neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-09 00:50:25.053287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release//neutron-metering-agent:26.0.3.20260328', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-09 00:50:25.053296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:50:25.053306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release//ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.053311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:50:25.053316 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.053356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-09 00:50:25.053363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 
'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release//neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-09 00:50:25.053368 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.053373 | orchestrator | 2026-04-09 00:50:25.053378 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-09 00:50:25.053413 | orchestrator | Thursday 09 April 2026 00:48:25 +0000 (0:00:01.703) 0:03:24.416 ******** 2026-04-09 00:50:25.053422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.053431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.053445 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.053453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.053464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.053473 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.053478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.053483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.053488 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.053493 | orchestrator | 2026-04-09 00:50:25.053498 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-09 00:50:25.053502 | orchestrator | Thursday 09 April 2026 00:48:27 +0000 (0:00:01.702) 0:03:26.119 ******** 2026-04-09 00:50:25.053507 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.053512 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.053517 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.053521 | orchestrator | 2026-04-09 00:50:25.053526 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-09 00:50:25.053531 | orchestrator | Thursday 09 April 2026 00:48:28 +0000 (0:00:01.222) 0:03:27.341 ******** 2026-04-09 00:50:25.053536 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.053540 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.053545 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.053550 | orchestrator | 2026-04-09 00:50:25.053555 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-09 
00:50:25.053559 | orchestrator | Thursday 09 April 2026 00:48:30 +0000 (0:00:01.773) 0:03:29.115 ******** 2026-04-09 00:50:25.053564 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.053569 | orchestrator | 2026-04-09 00:50:25.053581 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-09 00:50:25.053633 | orchestrator | Thursday 09 April 2026 00:48:31 +0000 (0:00:01.072) 0:03:30.187 ******** 2026-04-09 00:50:25.053641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 00:50:25.053647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 00:50:25.053661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 00:50:25.053666 | orchestrator | 2026-04-09 00:50:25.053671 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-09 
00:50:25.053676 | orchestrator | Thursday 09 April 2026 00:48:34 +0000 (0:00:03.372) 0:03:33.560 ******** 2026-04-09 00:50:25.053723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 00:50:25.053738 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.053745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 00:50:25.053759 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.053770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release//placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 00:50:25.053778 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.053786 | orchestrator | 2026-04-09 00:50:25.053794 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-09 00:50:25.053801 | orchestrator | Thursday 09 April 2026 00:48:35 +0000 (0:00:00.946) 0:03:34.506 ******** 2026-04-09 00:50:25.053809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:50:25.053818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:50:25.053826 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.053834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:50:25.053842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:50:25.053849 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.053909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:50:25.053916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:50:25.053921 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.053926 | orchestrator | 2026-04-09 00:50:25.053931 | 
orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-09 00:50:25.053935 | orchestrator | Thursday 09 April 2026 00:48:36 +0000 (0:00:00.814) 0:03:35.321 ******** 2026-04-09 00:50:25.053945 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.053950 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.053955 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.053960 | orchestrator | 2026-04-09 00:50:25.053965 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-09 00:50:25.053969 | orchestrator | Thursday 09 April 2026 00:48:37 +0000 (0:00:01.109) 0:03:36.430 ******** 2026-04-09 00:50:25.053974 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.053979 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.053984 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.053988 | orchestrator | 2026-04-09 00:50:25.053993 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-09 00:50:25.053998 | orchestrator | Thursday 09 April 2026 00:48:39 +0000 (0:00:01.911) 0:03:38.342 ******** 2026-04-09 00:50:25.054003 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.054008 | orchestrator | 2026-04-09 00:50:25.054012 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-09 00:50:25.054045 | orchestrator | Thursday 09 April 2026 00:48:41 +0000 (0:00:01.315) 0:03:39.658 ******** 2026-04-09 00:50:25.054057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.054077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.054122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.054135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.054140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.054149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.054154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 
'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.054182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.054192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.054197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.054225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.054231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.054237 | orchestrator | 2026-04-09 00:50:25.054241 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-09 00:50:25.054247 | orchestrator | Thursday 09 April 2026 00:48:45 +0000 (0:00:04.695) 0:03:44.354 ******** 2026-04-09 00:50:25.054277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.054288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.054294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.054302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.054307 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.054313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.054337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.054343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.054348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.054353 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.054361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.054429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release//nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.054444 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release//nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.054449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release//nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.054454 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.054458 | orchestrator | 2026-04-09 00:50:25.054463 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-09 00:50:25.054468 | orchestrator | Thursday 09 April 2026 00:48:46 +0000 (0:00:00.651) 0:03:45.006 ******** 2026-04-09 00:50:25.054473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.054479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.054485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.054493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.054498 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.054503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.054508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.054519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.054524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.054529 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.054533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.054555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.054561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.054566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.054581 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.054586 | orchestrator | 2026-04-09 00:50:25.054591 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-09 00:50:25.054596 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:01.411) 0:03:46.417 ******** 2026-04-09 00:50:25.054601 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.054606 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.054611 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.054616 | orchestrator | 2026-04-09 00:50:25.054620 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL 
rules config] *************** 2026-04-09 00:50:25.054625 | orchestrator | Thursday 09 April 2026 00:48:49 +0000 (0:00:01.285) 0:03:47.702 ******** 2026-04-09 00:50:25.054630 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.054635 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.054640 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.054647 | orchestrator | 2026-04-09 00:50:25.054655 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-09 00:50:25.054662 | orchestrator | Thursday 09 April 2026 00:48:51 +0000 (0:00:02.012) 0:03:49.714 ******** 2026-04-09 00:50:25.054669 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.054677 | orchestrator | 2026-04-09 00:50:25.054684 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-09 00:50:25.054691 | orchestrator | Thursday 09 April 2026 00:48:52 +0000 (0:00:01.157) 0:03:50.872 ******** 2026-04-09 00:50:25.054697 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-09 00:50:25.054705 | orchestrator | 2026-04-09 00:50:25.054713 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-09 00:50:25.054720 | orchestrator | Thursday 09 April 2026 00:48:53 +0000 (0:00:00.946) 0:03:51.819 ******** 2026-04-09 00:50:25.054752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-09 00:50:25.054761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-09 00:50:25.054768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-09 00:50:25.054775 | orchestrator | 2026-04-09 00:50:25.054782 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-09 00:50:25.054790 | orchestrator | Thursday 09 April 2026 00:48:56 +0000 (0:00:03.589) 0:03:55.408 ******** 2026-04-09 00:50:25.054829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}}}})  2026-04-09 00:50:25.054838 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.054844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:50:25.054851 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.054859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:50:25.054866 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.054873 | orchestrator | 2026-04-09 00:50:25.054881 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-09 00:50:25.054888 | orchestrator | Thursday 09 April 2026 00:48:58 +0000 (0:00:01.260) 0:03:56.669 ******** 2026-04-09 00:50:25.054896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:50:25.054912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:50:25.054919 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.054925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:50:25.054940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:50:25.054948 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.054955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:50:25.054962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-09 00:50:25.054970 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.054976 | orchestrator | 2026-04-09 00:50:25.054983 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-09 00:50:25.054990 | orchestrator | Thursday 09 April 2026 00:48:59 +0000 (0:00:01.383) 0:03:58.052 ******** 2026-04-09 00:50:25.054997 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.055004 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.055011 | orchestrator | changed: [testbed-node-1] 
2026-04-09 00:50:25.055018 | orchestrator | 2026-04-09 00:50:25.055026 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-09 00:50:25.055034 | orchestrator | Thursday 09 April 2026 00:49:01 +0000 (0:00:02.258) 0:04:00.310 ******** 2026-04-09 00:50:25.055042 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.055050 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.055058 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.055066 | orchestrator | 2026-04-09 00:50:25.055073 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-09 00:50:25.055081 | orchestrator | Thursday 09 April 2026 00:49:04 +0000 (0:00:02.588) 0:04:02.899 ******** 2026-04-09 00:50:25.055089 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-09 00:50:25.055096 | orchestrator | 2026-04-09 00:50:25.055103 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-09 00:50:25.055110 | orchestrator | Thursday 09 April 2026 00:49:05 +0000 (0:00:00.694) 0:04:03.594 ******** 2026-04-09 00:50:25.055150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:50:25.055160 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.055169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:50:25.055184 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.055194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:50:25.055202 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.055210 | orchestrator | 2026-04-09 00:50:25.055218 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-09 00:50:25.055226 | orchestrator | Thursday 09 April 2026 00:49:06 +0000 (0:00:01.101) 0:04:04.695 ******** 2026-04-09 00:50:25.055239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:50:25.055247 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.055256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:50:25.055264 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.055272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-09 00:50:25.055280 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.055288 | orchestrator | 2026-04-09 00:50:25.055295 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-09 00:50:25.055303 | orchestrator | Thursday 09 April 2026 00:49:07 +0000 (0:00:01.150) 0:04:05.846 ******** 2026-04-09 00:50:25.055311 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.055319 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.055327 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.055335 | orchestrator | 2026-04-09 00:50:25.055343 | 
orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-09 00:50:25.055351 | orchestrator | Thursday 09 April 2026 00:49:08 +0000 (0:00:01.298) 0:04:07.144 ******** 2026-04-09 00:50:25.055359 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:25.055412 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:25.055427 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:25.055434 | orchestrator | 2026-04-09 00:50:25.055441 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-09 00:50:25.055448 | orchestrator | Thursday 09 April 2026 00:49:10 +0000 (0:00:02.110) 0:04:09.255 ******** 2026-04-09 00:50:25.055454 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:25.055462 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:25.055469 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:25.055476 | orchestrator | 2026-04-09 00:50:25.055483 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-09 00:50:25.055490 | orchestrator | Thursday 09 April 2026 00:49:13 +0000 (0:00:02.518) 0:04:11.773 ******** 2026-04-09 00:50:25.055497 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-09 00:50:25.055505 | orchestrator | 2026-04-09 00:50:25.055512 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-09 00:50:25.055519 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:00.892) 0:04:12.665 ******** 2026-04-09 00:50:25.055526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:50:25.055534 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.055542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:50:25.055549 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.055560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:50:25.055567 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.055574 | orchestrator | 2026-04-09 00:50:25.055580 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-09 00:50:25.055588 | orchestrator | Thursday 09 April 2026 00:49:15 +0000 (0:00:00.960) 0:04:13.626 ******** 2026-04-09 00:50:25.055595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 
'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:50:25.055601 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.055609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:50:25.055621 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.055650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-09 00:50:25.055658 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.055665 | orchestrator | 2026-04-09 00:50:25.055671 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-09 
00:50:25.055678 | orchestrator | Thursday 09 April 2026 00:49:16 +0000 (0:00:01.072) 0:04:14.698 ******** 2026-04-09 00:50:25.055684 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.055691 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.055697 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.055704 | orchestrator | 2026-04-09 00:50:25.055712 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-09 00:50:25.055719 | orchestrator | Thursday 09 April 2026 00:49:17 +0000 (0:00:01.261) 0:04:15.959 ******** 2026-04-09 00:50:25.055725 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:25.055731 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:25.055739 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:25.055744 | orchestrator | 2026-04-09 00:50:25.055748 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-09 00:50:25.055753 | orchestrator | Thursday 09 April 2026 00:49:19 +0000 (0:00:02.102) 0:04:18.062 ******** 2026-04-09 00:50:25.055757 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:25.055761 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:25.055765 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:25.055770 | orchestrator | 2026-04-09 00:50:25.055774 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-09 00:50:25.055778 | orchestrator | Thursday 09 April 2026 00:49:22 +0000 (0:00:02.624) 0:04:20.687 ******** 2026-04-09 00:50:25.055783 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.055787 | orchestrator | 2026-04-09 00:50:25.055791 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-09 00:50:25.055795 | orchestrator | Thursday 09 April 2026 00:49:23 +0000 (0:00:01.332) 0:04:22.019 ******** 2026-04-09 00:50:25.055804 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 00:50:25.055809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:50:25.055819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:50:25.055841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:50:25.055847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.055852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 00:50:25.055857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:50:25.055868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:50:25.055873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-09 00:50:25.055891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:50:25.055896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.055901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:50:25.055905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:50:25.055917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:50:25.055921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.055926 | orchestrator | 2026-04-09 00:50:25.055930 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-09 00:50:25.055935 | orchestrator | Thursday 09 April 2026 00:49:26 +0000 (0:00:03.494) 0:04:25.514 ******** 2026-04-09 00:50:25.055953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 00:50:25.055958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:50:25.055962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:50:25.055969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:50:25.055978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.055982 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.055999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 00:50:25.056004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-09 00:50:25.056009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:50:25.056013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:50:25.056021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.056026 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.056031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-09 00:50:25.056035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2026-04-09 00:50:25.056052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-09 00:50:25.056078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-09 00:50:25.056084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-09 00:50:25.056091 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.056096 | orchestrator | 2026-04-09 00:50:25.056100 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-09 00:50:25.056105 | orchestrator | Thursday 09 April 2026 00:49:28 +0000 (0:00:01.096) 0:04:26.610 ******** 2026-04-09 00:50:25.056110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:50:25.056118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:50:25.056124 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.056128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:50:25.056133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:50:25.056137 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.056141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:50:25.056146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-09 00:50:25.056150 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.056155 | orchestrator | 2026-04-09 00:50:25.056159 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-09 00:50:25.056163 | orchestrator | Thursday 09 April 2026 00:49:28 +0000 (0:00:00.891) 0:04:27.501 ******** 2026-04-09 00:50:25.056168 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.056172 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.056176 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.056180 | orchestrator | 2026-04-09 00:50:25.056185 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-09 00:50:25.056189 | orchestrator | Thursday 09 April 2026 00:49:30 +0000 (0:00:01.307) 0:04:28.809 ******** 2026-04-09 00:50:25.056193 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.056198 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.056202 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.056207 | orchestrator | 2026-04-09 00:50:25.056224 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-09 00:50:25.056229 | orchestrator | Thursday 09 April 2026 00:49:32 +0000 (0:00:02.316) 0:04:31.126 ******** 2026-04-09 00:50:25.056233 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.056238 | orchestrator | 2026-04-09 00:50:25.056242 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-09 00:50:25.056246 | orchestrator | Thursday 09 April 2026 00:49:34 +0000 (0:00:01.583) 0:04:32.710 ******** 2026-04-09 00:50:25.056251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.056260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.056268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.056287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:50:25.056294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:50:25.056306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option 
httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:50:25.056311 | orchestrator | 2026-04-09 00:50:25.056316 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-09 00:50:25.056320 | orchestrator | Thursday 09 April 2026 00:49:39 +0000 (0:00:05.053) 0:04:37.763 ******** 2026-04-09 00:50:25.056325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.056347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:50:25.056365 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.056378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:25.056442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:50:25.056449 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.056455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-09 00:50:25.056485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:50:25.056499 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.056506 | orchestrator | 2026-04-09 00:50:25.056512 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-09 00:50:25.056518 | orchestrator | Thursday 09 April 2026 00:49:40 +0000 (0:00:00.815) 0:04:38.579 ******** 2026-04-09 00:50:25.056523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.056531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 00:50:25.056538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 00:50:25.056545 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.056552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.056562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 00:50:25.056570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 00:50:25.056576 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.056583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-09 00:50:25.056590 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 00:50:25.056597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-09 00:50:25.056604 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.056615 | orchestrator | 2026-04-09 00:50:25.056619 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-09 00:50:25.056624 | orchestrator | Thursday 09 April 2026 00:49:41 +0000 (0:00:01.101) 0:04:39.680 ******** 2026-04-09 00:50:25.056628 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.056632 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.056637 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.056641 | orchestrator | 2026-04-09 00:50:25.056645 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-09 00:50:25.056666 | orchestrator | Thursday 09 April 2026 00:49:41 +0000 (0:00:00.363) 0:04:40.044 ******** 2026-04-09 00:50:25.056671 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.056676 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.056680 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.056685 | orchestrator | 2026-04-09 00:50:25.056689 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-09 00:50:25.056693 | orchestrator | Thursday 09 April 2026 00:49:42 +0000 (0:00:01.074) 0:04:41.119 ******** 2026-04-09 
00:50:25.056698 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.056702 | orchestrator | 2026-04-09 00:50:25.056706 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-09 00:50:25.056711 | orchestrator | Thursday 09 April 2026 00:49:43 +0000 (0:00:01.429) 0:04:42.549 ******** 2026-04-09 00:50:25.056716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 00:50:25.056724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 00:50:25.056729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:50:25.056738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:50:25.056756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:50:25.056761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:50:25.056765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:50:25.056770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:50:25.056778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:50:25.056782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:50:25.056803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 00:50:25.056808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:50:25.056812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:50:25.056816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:50:25.056820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:50:25.056827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:50:25.056847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-09 00:50:25.056853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.056857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.056861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.056868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-09 00:50:25.056878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:50:25.056896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.056901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.056905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:50:25.056909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.056916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-09 00:50:25.056923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.056940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.056945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:50:25.056949 | orchestrator |
2026-04-09 00:50:25.056953 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-04-09 00:50:25.056957 | orchestrator | Thursday 09 April 2026 00:49:47 +0000 (0:00:03.631) 0:04:46.180 ********
2026-04-09 00:50:25.056961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-09 00:50:25.056968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:50:25.056976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.056980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.056984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:50:25.057000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.057005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-09 00:50:25.057012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.057022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.057039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-09 00:50:25.057044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:50:25.057049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:50:25.057053 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.057057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.057061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.057070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:50:25.057075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.057081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-09 00:50:25.057086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.057090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.057099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-09 00:50:25.057104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:50:25.057108 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.057112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:50:25.057120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.057125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.057129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:50:25.057137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:25.057144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release//prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-09 00:50:25.057148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.057157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:50:25.057161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:50:25.057165 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:25.057169 | orchestrator |
2026-04-09 00:50:25.057173 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-04-09 00:50:25.057177 | orchestrator | Thursday 09 April 2026 00:49:48 +0000 (0:00:00.791) 0:04:46.971 ********
2026-04-09 00:50:25.057181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 00:50:25.057189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 00:50:25.057194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.057200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.057205 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.057209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 00:50:25.057213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 00:50:25.057217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.057222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.057226 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.057233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 00:50:25.057237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-09 00:50:25.057241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.057249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-09 00:50:25.057253 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:25.057257 | orchestrator |
2026-04-09 00:50:25.057261 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-04-09 00:50:25.057265 | orchestrator | Thursday 09 April 2026 00:49:49 +0000 (0:00:01.072) 0:04:48.044 ********
2026-04-09 00:50:25.057269 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.057273 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.057277 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:25.057281 | orchestrator |
2026-04-09 00:50:25.057285 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-04-09 00:50:25.057289 | orchestrator | Thursday 09 April 2026 00:49:49 +0000 (0:00:00.395) 0:04:48.439 ********
2026-04-09 00:50:25.057292 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:25.057296 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:25.057300 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:25.057304 | orchestrator |
2026-04-09 00:50:25.057308 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-04-09 00:50:25.057312 | orchestrator | Thursday 09 April 2026 00:49:50 +0000 (0:00:01.331) 0:04:49.567 ********
2026-04-09 00:50:25.057316 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:50:25.057320 | orchestrator |
2026-04-09 00:50:25.057324 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-04-09 00:50:25.057330 | orchestrator | Thursday 09 April 2026 00:49:52 +0000 (0:00:01.331) 0:04:50.898 ********
2026-04-09 00:50:25.057334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:50:25.057341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-09 00:50:25.057349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-09 00:50:25.057354 | orchestrator | 2026-04-09 00:50:25.057358 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-09 00:50:25.057362 | orchestrator | Thursday 09 April 2026 00:49:54 +0000 (0:00:02.496) 0:04:53.394 ******** 2026-04-09 00:50:25.057369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:50:25.057373 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.057377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:50:25.057403 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.057414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release//rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-09 00:50:25.057426 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.057431 | orchestrator | 2026-04-09 00:50:25.057436 | orchestrator | TASK 
[haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-09 00:50:25.057440 | orchestrator | Thursday 09 April 2026 00:49:55 +0000 (0:00:00.372) 0:04:53.767 ******** 2026-04-09 00:50:25.057444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-09 00:50:25.057448 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.057452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-09 00:50:25.057456 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.057460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-09 00:50:25.057464 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.057468 | orchestrator | 2026-04-09 00:50:25.057472 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-09 00:50:25.057476 | orchestrator | Thursday 09 April 2026 00:49:55 +0000 (0:00:00.578) 0:04:54.346 ******** 2026-04-09 00:50:25.057480 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.057484 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.057487 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.057491 | orchestrator | 2026-04-09 00:50:25.057495 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-09 00:50:25.057499 | orchestrator | Thursday 09 April 2026 00:49:56 +0000 (0:00:00.365) 0:04:54.711 ******** 2026-04-09 00:50:25.057503 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.057507 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.057511 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 00:50:25.057515 | orchestrator | 2026-04-09 00:50:25.057519 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-09 00:50:25.057523 | orchestrator | Thursday 09 April 2026 00:49:57 +0000 (0:00:01.155) 0:04:55.867 ******** 2026-04-09 00:50:25.057527 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.057531 | orchestrator | 2026-04-09 00:50:25.057534 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-09 00:50:25.057538 | orchestrator | Thursday 09 April 2026 00:49:58 +0000 (0:00:01.584) 0:04:57.452 ******** 2026-04-09 00:50:25.057545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 00:50:25.057555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 00:50:25.057560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-09 00:50:25.057565 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 00:50:25.057572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 00:50:25.057582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-09 00:50:25.057587 | orchestrator | 2026-04-09 00:50:25.057591 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-09 00:50:25.057595 | orchestrator | Thursday 09 April 2026 00:50:04 +0000 (0:00:05.489) 0:05:02.941 ******** 2026-04-09 00:50:25.057599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-09 00:50:25.057606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 00:50:25.057611 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.057615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-09 00:50:25.057625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 00:50:25.057629 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.057633 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-09 00:50:25.057641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-09 00:50:25.057659 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.057664 | orchestrator | 2026-04-09 00:50:25.057668 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-09 00:50:25.057672 | orchestrator | Thursday 09 April 2026 00:50:05 +0000 (0:00:00.808) 0:05:03.749 ******** 2026-04-09 00:50:25.057676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-09 00:50:25.057681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-09 00:50:25.057686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:50:25.057690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:50:25.057694 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.057700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /docs']}})  2026-04-09 00:50:25.057705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-09 00:50:25.057709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:50:25.057713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:50:25.057717 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.057721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-09 00:50:25.057725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-09 00:50:25.057729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:50:25.057733 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-09 00:50:25.057740 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.057744 | orchestrator | 2026-04-09 00:50:25.057748 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-09 00:50:25.057752 | orchestrator | Thursday 09 April 2026 00:50:06 +0000 (0:00:01.071) 0:05:04.820 ******** 2026-04-09 00:50:25.057759 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.057763 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.057767 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.057771 | orchestrator | 2026-04-09 00:50:25.057775 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-09 00:50:25.057779 | orchestrator | Thursday 09 April 2026 00:50:07 +0000 (0:00:01.229) 0:05:06.050 ******** 2026-04-09 00:50:25.057783 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:50:25.057787 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:50:25.057790 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:50:25.057794 | orchestrator | 2026-04-09 00:50:25.057798 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-09 00:50:25.057802 | orchestrator | Thursday 09 April 2026 00:50:09 +0000 (0:00:01.945) 0:05:07.995 ******** 2026-04-09 00:50:25.057806 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.057810 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.057814 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.057818 | orchestrator | 2026-04-09 00:50:25.057822 | orchestrator | TASK [include_role : trove] 
**************************************************** 2026-04-09 00:50:25.057826 | orchestrator | Thursday 09 April 2026 00:50:09 +0000 (0:00:00.269) 0:05:08.264 ******** 2026-04-09 00:50:25.057830 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.057834 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.057838 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.057842 | orchestrator | 2026-04-09 00:50:25.057846 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-09 00:50:25.057850 | orchestrator | Thursday 09 April 2026 00:50:10 +0000 (0:00:00.454) 0:05:08.718 ******** 2026-04-09 00:50:25.057854 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.057858 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.057862 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.057866 | orchestrator | 2026-04-09 00:50:25.057870 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-09 00:50:25.057873 | orchestrator | Thursday 09 April 2026 00:50:10 +0000 (0:00:00.266) 0:05:08.985 ******** 2026-04-09 00:50:25.057877 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.057881 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.057885 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.057889 | orchestrator | 2026-04-09 00:50:25.057893 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-09 00:50:25.057897 | orchestrator | Thursday 09 April 2026 00:50:10 +0000 (0:00:00.270) 0:05:09.255 ******** 2026-04-09 00:50:25.057901 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.057905 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.057911 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.057915 | orchestrator | 2026-04-09 00:50:25.057919 | orchestrator | TASK [include_role : 
loadbalancer] ********************************************* 2026-04-09 00:50:25.057923 | orchestrator | Thursday 09 April 2026 00:50:10 +0000 (0:00:00.254) 0:05:09.510 ******** 2026-04-09 00:50:25.057927 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:50:25.057931 | orchestrator | 2026-04-09 00:50:25.057935 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-09 00:50:25.057939 | orchestrator | Thursday 09 April 2026 00:50:12 +0000 (0:00:01.582) 0:05:11.092 ******** 2026-04-09 00:50:25.057943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.057950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.057957 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-09 00:50:25.057962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.057966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-04-09 00:50:25.057972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-09 00:50:25.057976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:50:25.057984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:50:25.057988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-09 00:50:25.057992 | orchestrator | 2026-04-09 00:50:25.057996 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-09 00:50:25.058000 | orchestrator | Thursday 09 April 2026 00:50:14 +0000 (0:00:02.355) 0:05:13.448 ******** 2026-04-09 00:50:25.058004 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:50:25.058008 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:50:25.058012 | orchestrator | } 2026-04-09 00:50:25.058058 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:50:25.058062 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:50:25.058066 | orchestrator | } 2026-04-09 00:50:25.058071 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:50:25.058075 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:50:25.058079 | orchestrator | } 2026-04-09 00:50:25.058083 | orchestrator | 2026-04-09 00:50:25.058087 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:50:25.058094 | orchestrator | Thursday 09 April 2026 00:50:15 +0000 (0:00:00.321) 0:05:13.769 ******** 2026-04-09 00:50:25.058098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-09 00:50:25.058102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:50:25.058109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:50:25.058117 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:25.058121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-09 00:50:25.058125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:50:25.058130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:50:25.058134 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:25.058140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-09 00:50:25.058145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-09 00:50:25.058149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-09 00:50:25.058157 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:25.058161 | orchestrator | 2026-04-09 00:50:25.058165 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-09 00:50:25.058171 | orchestrator | Thursday 09 April 2026 00:50:16 +0000 (0:00:01.558) 0:05:15.328 ******** 2026-04-09 00:50:25.058175 | orchestrator | ok: [testbed-node-0] 2026-04-09 
00:50:25.058179 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:25.058183 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:25.058187 | orchestrator | 2026-04-09 00:50:25.058191 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-09 00:50:25.058195 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.893) 0:05:16.221 ******** 2026-04-09 00:50:25.058199 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:25.058203 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:25.058207 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:25.058211 | orchestrator | 2026-04-09 00:50:25.058215 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-09 00:50:25.058219 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.308) 0:05:16.530 ******** 2026-04-09 00:50:25.058223 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:25.058227 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:25.058231 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:25.058235 | orchestrator | 2026-04-09 00:50:25.058239 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-09 00:50:25.058243 | orchestrator | Thursday 09 April 2026 00:50:19 +0000 (0:00:01.068) 0:05:17.599 ******** 2026-04-09 00:50:25.058247 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:25.058250 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:50:25.058254 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:25.058258 | orchestrator | 2026-04-09 00:50:25.058262 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-09 00:50:25.058266 | orchestrator | Thursday 09 April 2026 00:50:20 +0000 (0:00:00.991) 0:05:18.591 ******** 2026-04-09 00:50:25.058270 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:50:25.058274 | orchestrator | ok: [testbed-node-1] 2026-04-09 
00:50:25.058278 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:50:25.058282 | orchestrator | 2026-04-09 00:50:25.058286 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-09 00:50:25.058290 | orchestrator | Thursday 09 April 2026 00:50:20 +0000 (0:00:00.903) 0:05:19.494 ******** 2026-04-09 00:50:25.058297 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_jmo7nqce/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_jmo7nqce/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_jmo7nqce/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_jmo7nqce/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:50:25.058309 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_6o3299q9/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_6o3299q9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_6o3299q9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_6o3299q9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:50:25.058319 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ss0s6e_i/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ss0s6e_i/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ss0s6e_i/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ss0s6e_i/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fhaproxy: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:50:25.058327 | orchestrator | 2026-04-09 00:50:25.058331 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:50:25.058335 | orchestrator | testbed-node-0 : ok=120  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-09 00:50:25.058340 | orchestrator | testbed-node-1 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-09 00:50:25.058344 | orchestrator | testbed-node-2 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-09 00:50:25.058348 | orchestrator | 2026-04-09 00:50:25.058352 | orchestrator | 2026-04-09 00:50:25.058356 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:50:25.058360 | orchestrator | Thursday 09 April 2026 00:50:23 +0000 (0:00:02.534) 0:05:22.028 ******** 2026-04-09 00:50:25.058364 | orchestrator | =============================================================================== 2026-04-09 00:50:25.058368 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.49s 2026-04-09 00:50:25.058371 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.40s 2026-04-09 00:50:25.058375 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.23s 2026-04-09 00:50:25.058393 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.05s 2026-04-09 00:50:25.058397 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.94s 2026-04-09 00:50:25.058401 | orchestrator | haproxy-config : Copying 
over neutron haproxy config -------------------- 4.70s 2026-04-09 00:50:25.058405 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.70s 2026-04-09 00:50:25.058409 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.54s 2026-04-09 00:50:25.058413 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.31s 2026-04-09 00:50:25.058417 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.08s 2026-04-09 00:50:25.058421 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.71s 2026-04-09 00:50:25.058429 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.71s 2026-04-09 00:50:25.058433 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.63s 2026-04-09 00:50:25.058440 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 3.60s 2026-04-09 00:50:25.058444 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.59s 2026-04-09 00:50:25.058448 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 3.54s 2026-04-09 00:50:25.058452 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.52s 2026-04-09 00:50:25.058456 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.49s 2026-04-09 00:50:25.058459 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.47s 2026-04-09 00:50:25.058463 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.43s 2026-04-09 00:50:25.058468 | orchestrator | 2026-04-09 00:50:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:50:28.108197 | orchestrator | 2026-04-09 00:50:28 | INFO  | Task 
c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED 2026-04-09 00:50:28.108316 | orchestrator | 2026-04-09 00:50:28 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:50:28.108333 | orchestrator | 2026-04-09 00:50:28 | INFO  | Task 121a5cfb-89de-4a54-b9cd-f76f1e91b68b is in state STARTED 2026-04-09 00:50:28.108348 | orchestrator | 2026-04-09 00:50:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:50:31.144270 | orchestrator | 2026-04-09 00:50:31 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED 2026-04-09 00:50:31.144818 | orchestrator | 2026-04-09 00:50:31 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:50:31.145738 | orchestrator | 2026-04-09 00:50:31 | INFO  | Task 121a5cfb-89de-4a54-b9cd-f76f1e91b68b is in state STARTED 2026-04-09 00:50:31.145765 | orchestrator | 2026-04-09 00:50:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:50:34.182646 | orchestrator | 2026-04-09 00:50:34 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED 2026-04-09 00:50:34.182745 | orchestrator | 2026-04-09 00:50:34 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:50:34.183276 | orchestrator | 2026-04-09 00:50:34 | INFO  | Task 121a5cfb-89de-4a54-b9cd-f76f1e91b68b is in state STARTED 2026-04-09 00:50:34.183349 | orchestrator | 2026-04-09 00:50:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:50:37.219919 | orchestrator | 2026-04-09 00:50:37 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED 2026-04-09 00:50:37.222718 | orchestrator | 2026-04-09 00:50:37 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:50:37.224753 | orchestrator | 2026-04-09 00:50:37 | INFO  | Task 121a5cfb-89de-4a54-b9cd-f76f1e91b68b is in state STARTED 2026-04-09 00:50:37.225082 | orchestrator | 2026-04-09 00:50:37 | INFO  | Wait 1 second(s) until the next 
check 2026-04-09 00:50:40.291599 | orchestrator | 2026-04-09 00:50:40 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED 2026-04-09 00:50:40.291953 | orchestrator | 2026-04-09 00:50:40 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:50:40.292835 | orchestrator | 2026-04-09 00:50:40 | INFO  | Task 121a5cfb-89de-4a54-b9cd-f76f1e91b68b is in state STARTED 2026-04-09 00:50:40.292874 | orchestrator | 2026-04-09 00:50:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:50:43.332341 | orchestrator | 2026-04-09 00:50:43 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED 2026-04-09 00:50:43.332902 | orchestrator | 2026-04-09 00:50:43 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:50:43.334629 | orchestrator | 2026-04-09 00:50:43 | INFO  | Task 121a5cfb-89de-4a54-b9cd-f76f1e91b68b is in state STARTED 2026-04-09 00:50:43.334682 | orchestrator | 2026-04-09 00:50:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:50:46.368262 | orchestrator | 2026-04-09 00:50:46 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED 2026-04-09 00:50:46.369199 | orchestrator | 2026-04-09 00:50:46 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:50:46.371808 | orchestrator | 2026-04-09 00:50:46 | INFO  | Task 121a5cfb-89de-4a54-b9cd-f76f1e91b68b is in state STARTED 2026-04-09 00:50:46.371872 | orchestrator | 2026-04-09 00:50:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:50:49.407470 | orchestrator | 2026-04-09 00:50:49 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED 2026-04-09 00:50:49.408867 | orchestrator | 2026-04-09 00:50:49 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:50:49.409571 | orchestrator | 2026-04-09 00:50:49 | INFO  | Task 121a5cfb-89de-4a54-b9cd-f76f1e91b68b is in state STARTED 2026-04-09 
00:50:49.409611 | orchestrator | 2026-04-09 00:50:49 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:50:52.459968 | orchestrator | 2026-04-09 00:50:52 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED
2026-04-09 00:50:52.460061 | orchestrator | 2026-04-09 00:50:52 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:50:52.461923 | orchestrator |
2026-04-09 00:50:52.462000 | orchestrator | 2026-04-09 00:50:52 | INFO  | Task 121a5cfb-89de-4a54-b9cd-f76f1e91b68b is in state SUCCESS
2026-04-09 00:50:52.462790 | orchestrator |
2026-04-09 00:50:52.462861 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:50:52.462872 | orchestrator |
2026-04-09 00:50:52.462879 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:50:52.462930 | orchestrator | Thursday 09 April 2026 00:50:27 +0000 (0:00:00.358) 0:00:00.358 ********
2026-04-09 00:50:52.462939 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:50:52.462948 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:50:52.462954 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:50:52.463118 | orchestrator |
2026-04-09 00:50:52.463132 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:50:52.463139 | orchestrator | Thursday 09 April 2026 00:50:27 +0000 (0:00:00.276) 0:00:00.634 ********
2026-04-09 00:50:52.463147 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-09 00:50:52.463155 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-09 00:50:52.463162 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-09 00:50:52.463169 | orchestrator |
2026-04-09 00:50:52.463176 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-09 00:50:52.463183 | orchestrator |
2026-04-09 00:50:52.463190 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-09 00:50:52.463198 | orchestrator | Thursday 09 April 2026 00:50:27 +0000 (0:00:00.274) 0:00:00.909 ********
2026-04-09 00:50:52.463205 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:50:52.463212 | orchestrator |
2026-04-09 00:50:52.463220 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-04-09 00:50:52.463227 | orchestrator | Thursday 09 April 2026 00:50:28 +0000 (0:00:00.621) 0:00:01.531 ********
2026-04-09 00:50:52.463234 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 00:50:52.463264 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 00:50:52.463271 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-09 00:50:52.463278 | orchestrator |
2026-04-09 00:50:52.463286 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-04-09 00:50:52.463338 | orchestrator | Thursday 09 April 2026 00:50:29 +0000 (0:00:00.988) 0:00:02.519 ********
2026-04-09 00:50:52.463352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.463364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.463398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.463408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.463424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.463434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.463441 | orchestrator |
2026-04-09 00:50:52.463449 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-09 00:50:52.463457 | orchestrator | Thursday 09 April 2026 00:50:30 +0000 (0:00:01.304) 0:00:03.824 ********
2026-04-09 00:50:52.463464 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:50:52.463471 | orchestrator |
2026-04-09 00:50:52.463563 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-04-09 00:50:52.463571 | orchestrator | Thursday 09 April 2026 00:50:31 +0000 (0:00:00.503) 0:00:04.327 ********
2026-04-09 00:50:52.463579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.463593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.463638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.463651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.463665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.463715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.463724 | orchestrator |
2026-04-09 00:50:52.463732 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-04-09 00:50:52.463741 | orchestrator | Thursday 09 April 2026 00:50:34 +0000 (0:00:02.610) 0:00:06.938 ********
2026-04-09 00:50:52.463749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.463770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.463780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.463795 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:52.463804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.463813 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:52.463822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.463839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.463851 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:52.463859 | orchestrator |
2026-04-09 00:50:52.463867 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-04-09 00:50:52.463876 | orchestrator | Thursday 09 April 2026 00:50:34 +0000 (0:00:00.951) 0:00:07.890 ********
2026-04-09 00:50:52.463884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.463893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.463904 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:50:52.463918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.463933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.463949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.463958 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:50:52.463967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.463976 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:50:52.463984 | orchestrator |
2026-04-09 00:50:52.463993 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-04-09 00:50:52.464002 | orchestrator | Thursday 09 April 2026 00:50:35 +0000 (0:00:00.861) 0:00:08.752 ********
2026-04-09 00:50:52.464021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.464037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.464048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.464060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.464079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.464094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-09 00:50:52.464103 | orchestrator |
2026-04-09 00:50:52.464111 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-04-09 00:50:52.464120 | orchestrator | Thursday 09 April 2026 00:50:38 +0000 (0:00:02.977) 0:00:11.243 ********
2026-04-09 00:50:52.464127 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:52.464134 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:52.464142 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:52.464149 | orchestrator |
2026-04-09 00:50:52.464155 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-04-09 00:50:52.464163 | orchestrator | Thursday 09 April 2026 00:50:41 +0000 (0:00:02.977) 0:00:14.221 ********
2026-04-09 00:50:52.464170 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:50:52.464177 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:50:52.464184 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:50:52.464191 | orchestrator |
2026-04-09 00:50:52.464198 | orchestrator | TASK [service-check-containers : opensearch | Check containers] ****************
2026-04-09 00:50:52.464205 | orchestrator | Thursday 09 April 2026 00:50:42 +0000 (0:00:01.570) 0:00:15.792 ********
2026-04-09 00:50:52.464213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.464224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.464243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:50:52.464251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value':
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:50:52.464259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:50:52.464278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-09 00:50:52.464305 | orchestrator | 2026-04-09 00:50:52.464314 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-09 00:50:52.464322 | orchestrator | Thursday 09 April 2026 00:50:44 +0000 (0:00:02.045) 0:00:17.838 ******** 2026-04-09 00:50:52.464329 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:50:52.464336 | orchestrator |  "msg": "Notifying 
handlers" 2026-04-09 00:50:52.464343 | orchestrator | } 2026-04-09 00:50:52.464351 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:50:52.464359 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:50:52.464365 | orchestrator | } 2026-04-09 00:50:52.464371 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:50:52.464376 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:50:52.464382 | orchestrator | } 2026-04-09 00:50:52.464388 | orchestrator | 2026-04-09 00:50:52.464394 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:50:52.464400 | orchestrator | Thursday 09 April 2026 00:50:45 +0000 (0:00:00.508) 0:00:18.347 ******** 2026-04-09 00:50:52.464406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:52.464413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:50:52.464426 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:52.464436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:52.464450 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:50:52.464458 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:52.464466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:50:52.464474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release//opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-09 00:50:52.464486 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:52.464494 | orchestrator | 2026-04-09 00:50:52.464501 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-09 00:50:52.464508 | orchestrator | Thursday 09 April 2026 00:50:46 +0000 (0:00:00.805) 0:00:19.153 ******** 2026-04-09 00:50:52.464515 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:52.464522 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:50:52.464530 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:50:52.464537 | orchestrator | 
2026-04-09 00:50:52.464543 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-09 00:50:52.464550 | orchestrator | Thursday 09 April 2026 00:50:46 +0000 (0:00:00.274) 0:00:19.427 ******** 2026-04-09 00:50:52.464557 | orchestrator | 2026-04-09 00:50:52.464564 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-09 00:50:52.464572 | orchestrator | Thursday 09 April 2026 00:50:46 +0000 (0:00:00.063) 0:00:19.490 ******** 2026-04-09 00:50:52.464578 | orchestrator | 2026-04-09 00:50:52.464588 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-09 00:50:52.464595 | orchestrator | Thursday 09 April 2026 00:50:46 +0000 (0:00:00.059) 0:00:19.550 ******** 2026-04-09 00:50:52.464602 | orchestrator | 2026-04-09 00:50:52.464610 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-09 00:50:52.464617 | orchestrator | Thursday 09 April 2026 00:50:46 +0000 (0:00:00.062) 0:00:19.612 ******** 2026-04-09 00:50:52.464624 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:52.464631 | orchestrator | 2026-04-09 00:50:52.464638 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-09 00:50:52.464648 | orchestrator | Thursday 09 April 2026 00:50:47 +0000 (0:00:00.581) 0:00:20.194 ******** 2026-04-09 00:50:52.464655 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:50:52.464663 | orchestrator | 2026-04-09 00:50:52.464669 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-09 00:50:52.464677 | orchestrator | Thursday 09 April 2026 00:50:47 +0000 (0:00:00.214) 0:00:20.408 ******** 2026-04-09 00:50:52.464684 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ai_mjhe0/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ai_mjhe0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ai_mjhe0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ai_mjhe0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, 
response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:50:52.464707 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_udlbuylc/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_udlbuylc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_udlbuylc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_udlbuylc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:50:52.464715 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_qahmsi5k/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_qahmsi5k/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_qahmsi5k/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n 
self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_qahmsi5k/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fopensearch: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:50:52.464727 | orchestrator | 2026-04-09 00:50:52.464735 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:50:52.464746 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-09 00:50:52.464755 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-09 00:50:52.464763 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-09 00:50:52.464771 | orchestrator | 2026-04-09 00:50:52.464777 | orchestrator | 2026-04-09 00:50:52.464787 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:50:52.464794 | orchestrator | Thursday 09 April 2026 00:50:50 +0000 (0:00:03.367) 0:00:23.776 ******** 2026-04-09 00:50:52.464800 | orchestrator | 
=============================================================================== 2026-04-09 00:50:52.464806 | orchestrator | opensearch : Restart opensearch container ------------------------------- 3.37s 2026-04-09 00:50:52.464812 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.98s 2026-04-09 00:50:52.464818 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.61s 2026-04-09 00:50:52.464825 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.49s 2026-04-09 00:50:52.464831 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.05s 2026-04-09 00:50:52.464837 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.57s 2026-04-09 00:50:52.464843 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.30s 2026-04-09 00:50:52.464850 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.99s 2026-04-09 00:50:52.464856 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.95s 2026-04-09 00:50:52.464862 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.86s 2026-04-09 00:50:52.464872 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.81s 2026-04-09 00:50:52.464879 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s 2026-04-09 00:50:52.464886 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.58s 2026-04-09 00:50:52.464893 | orchestrator | service-check-containers : opensearch | Notify handlers to restart containers --- 0.51s 2026-04-09 00:50:52.464899 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-04-09 00:50:52.464906 | orchestrator 
| Group hosts based on Kolla action --------------------------------------- 0.28s 2026-04-09 00:50:52.464913 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.27s 2026-04-09 00:50:52.464920 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.27s 2026-04-09 00:50:52.464926 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.22s 2026-04-09 00:50:52.464933 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.18s 2026-04-09 00:50:52.464939 | orchestrator | 2026-04-09 00:50:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:50:55.501877 | orchestrator | 2026-04-09 00:50:55 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED 2026-04-09 00:50:55.503906 | orchestrator | 2026-04-09 00:50:55 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:50:55.504097 | orchestrator | 2026-04-09 00:50:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:50:58.542109 | orchestrator | 2026-04-09 00:50:58 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED 2026-04-09 00:50:58.543686 | orchestrator | 2026-04-09 00:50:58 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:50:58.543735 | orchestrator | 2026-04-09 00:50:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:01.576482 | orchestrator | 2026-04-09 00:51:01 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED 2026-04-09 00:51:01.578344 | orchestrator | 2026-04-09 00:51:01 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:51:01.578415 | orchestrator | 2026-04-09 00:51:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:04.615042 | orchestrator | 2026-04-09 00:51:04 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED 2026-04-09 
2026-04-09 00:51:04.616123 | orchestrator | 2026-04-09 00:51:04 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:51:04.616165 | orchestrator | 2026-04-09 00:51:04 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:07.658845 | orchestrator | 2026-04-09 00:51:07 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED
2026-04-09 00:51:07.660110 | orchestrator | 2026-04-09 00:51:07 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:51:07.660178 | orchestrator | 2026-04-09 00:51:07 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:10.696025 | orchestrator | 2026-04-09 00:51:10 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED
2026-04-09 00:51:10.698219 | orchestrator | 2026-04-09 00:51:10 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:51:10.698260 | orchestrator | 2026-04-09 00:51:10 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:13.736862 | orchestrator | 2026-04-09 00:51:13 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED
2026-04-09 00:51:13.739571 | orchestrator | 2026-04-09 00:51:13 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:51:13.739638 | orchestrator | 2026-04-09 00:51:13 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:16.780739 | orchestrator | 2026-04-09 00:51:16 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED
2026-04-09 00:51:16.781385 | orchestrator | 2026-04-09 00:51:16 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:51:16.781423 | orchestrator | 2026-04-09 00:51:16 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:19.822255 | orchestrator | 2026-04-09 00:51:19 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED
2026-04-09 00:51:19.826307 | orchestrator | 2026-04-09 00:51:19 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:51:19.826388 | orchestrator | 2026-04-09 00:51:19 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:22.872128 | orchestrator | 2026-04-09 00:51:22 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED
2026-04-09 00:51:22.874824 | orchestrator | 2026-04-09 00:51:22 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:51:22.875998 | orchestrator | 2026-04-09 00:51:22 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:25.912718 | orchestrator | 2026-04-09 00:51:25 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED
2026-04-09 00:51:25.914310 | orchestrator | 2026-04-09 00:51:25 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:51:25.914384 | orchestrator | 2026-04-09 00:51:25 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:28.958621 | orchestrator | 2026-04-09 00:51:28 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED
2026-04-09 00:51:28.960571 | orchestrator | 2026-04-09 00:51:28 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:51:28.960671 | orchestrator | 2026-04-09 00:51:28 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:32.015338 | orchestrator | 2026-04-09 00:51:32 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED
2026-04-09 00:51:32.015420 | orchestrator | 2026-04-09 00:51:32 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:51:32.015428 | orchestrator | 2026-04-09 00:51:32 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:35.069076 | orchestrator | 2026-04-09 00:51:35 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state STARTED
2026-04-09 00:51:35.070978 | orchestrator | 2026-04-09 00:51:35 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:51:35.071428 | orchestrator | 2026-04-09 00:51:35 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:51:38.116093 | orchestrator | 2026-04-09 00:51:38 | INFO  | Task c43e6330-055a-44f8-815f-0cfdb83e12be is in state SUCCESS
2026-04-09 00:51:38.116862 | orchestrator |
2026-04-09 00:51:38.116915 | orchestrator |
2026-04-09 00:51:38.116922 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-04-09 00:51:38.116928 | orchestrator |
2026-04-09 00:51:38.116934 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-04-09 00:51:38.116939 | orchestrator | Thursday 09 April 2026 00:50:27 +0000 (0:00:00.102) 0:00:00.102 ********
2026-04-09 00:51:38.116944 | orchestrator | ok: [localhost] => {
2026-04-09 00:51:38.116950 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-04-09 00:51:38.116955 | orchestrator | }
2026-04-09 00:51:38.116960 | orchestrator |
2026-04-09 00:51:38.116964 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-04-09 00:51:38.116969 | orchestrator | Thursday 09 April 2026 00:50:27 +0000 (0:00:00.054) 0:00:00.157 ********
2026-04-09 00:51:38.117060 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-04-09 00:51:38.117068 | orchestrator | ...ignoring
2026-04-09 00:51:38.117073 | orchestrator |
2026-04-09 00:51:38.117079 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-04-09 00:51:38.117099 | orchestrator | Thursday 09 April 2026 00:50:30 +0000 (0:00:03.087) 0:00:03.245 ********
2026-04-09 00:51:38.117103 | orchestrator | skipping: [localhost]
2026-04-09 00:51:38.117107 | orchestrator |
2026-04-09 00:51:38.117121 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-04-09 00:51:38.117125 | orchestrator | Thursday 09 April 2026 00:50:30 +0000 (0:00:00.067) 0:00:03.312 ********
2026-04-09 00:51:38.117129 | orchestrator | ok: [localhost]
2026-04-09 00:51:38.117133 | orchestrator |
2026-04-09 00:51:38.117137 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:51:38.117141 | orchestrator |
2026-04-09 00:51:38.117145 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:51:38.117149 | orchestrator | Thursday 09 April 2026 00:50:30 +0000 (0:00:00.196) 0:00:03.508 ********
2026-04-09 00:51:38.117153 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:51:38.117157 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:51:38.117161 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:51:38.117165 | orchestrator |
2026-04-09 00:51:38.117169 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:51:38.117173 | orchestrator | Thursday 09 April 2026 00:50:30 +0000 (0:00:00.312) 0:00:03.821 ********
2026-04-09 00:51:38.117177 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-09 00:51:38.117181 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
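The ignored failure above comes from a wait_for-style probe: open a TCP connection to 192.168.16.9:3306 and search the server greeting for the string "MariaDB". A timeout means MariaDB is not running yet, which is exactly the expected state on a fresh deploy. A minimal sketch of that probe, not the Ansible wait_for module itself; all names (`banner_contains`, `_FakeMariaDB`) are illustrative, and the fake server exists only to demonstrate the happy path:

```python
# Sketch of the "Check MariaDB service" probe: connect, read the greeting,
# search it for a string. Hypothetical helper names, for illustration only.
import socket
import socketserver
import threading

def banner_contains(host: str, port: int, needle: str, timeout: float = 2.0) -> bool:
    """Return True if the TCP greeting from host:port contains needle."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return needle.encode() in sock.recv(1024)
    except OSError:
        # Connection refused or timed out: the service is not up yet --
        # the play above treats this outcome as "fine".
        return False

class _FakeMariaDB(socketserver.BaseRequestHandler):
    """Toy handler that sends a MariaDB-like greeting (demo only)."""
    def handle(self):
        self.request.sendall(b"5.5.5-10.11.16-MariaDB\x00")

server = socketserver.TCPServer(("127.0.0.1", 0), _FakeMariaDB)
threading.Thread(target=server.serve_forever, daemon=True).start()

found = banner_contains("127.0.0.1", server.server_address[1], "MariaDB")
missing = banner_contains("127.0.0.1", server.server_address[1], "PostgreSQL")
print(found, missing)  # the fake greeting contains "MariaDB" but not "PostgreSQL"
server.shutdown()
```

The result of this probe is what drives `kolla_action_mariadb` in the play above: a successful match would select the `upgrade` action, while the timeout seen here leaves the fresh-deploy action (`kolla_action_ng`) in place.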
2026-04-09 00:51:38.117186 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-09 00:51:38.117189 | orchestrator | 2026-04-09 00:51:38.117194 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-09 00:51:38.117198 | orchestrator | 2026-04-09 00:51:38.117201 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-09 00:51:38.117205 | orchestrator | Thursday 09 April 2026 00:50:31 +0000 (0:00:00.377) 0:00:04.199 ******** 2026-04-09 00:51:38.117209 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-09 00:51:38.117213 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-09 00:51:38.117217 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-09 00:51:38.117221 | orchestrator | 2026-04-09 00:51:38.117265 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 00:51:38.117271 | orchestrator | Thursday 09 April 2026 00:50:31 +0000 (0:00:00.353) 0:00:04.553 ******** 2026-04-09 00:51:38.117275 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:51:38.117280 | orchestrator | 2026-04-09 00:51:38.117284 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-09 00:51:38.117288 | orchestrator | Thursday 09 April 2026 00:50:32 +0000 (0:00:00.697) 0:00:05.250 ******** 2026-04-09 00:51:38.117310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:51:38.117334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:51:38.117343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:51:38.117360 | orchestrator | 2026-04-09 00:51:38.117366 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-09 00:51:38.117372 | orchestrator | Thursday 09 April 2026 00:50:35 +0000 (0:00:03.298) 0:00:08.549 ******** 2026-04-09 00:51:38.117378 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.117384 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.117391 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:51:38.117397 | orchestrator | 2026-04-09 00:51:38.117403 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-09 00:51:38.117409 | orchestrator | Thursday 09 April 2026 00:50:36 +0000 (0:00:00.587) 0:00:09.136 ******** 2026-04-09 00:51:38.117415 | orchestrator | 
skipping: [testbed-node-1] 2026-04-09 00:51:38.117421 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.117427 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:51:38.117433 | orchestrator | 2026-04-09 00:51:38.117440 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-09 00:51:38.117446 | orchestrator | Thursday 09 April 2026 00:50:37 +0000 (0:00:01.360) 0:00:10.497 ******** 2026-04-09 00:51:38.117457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:51:38.117470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:51:38.117487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-09 00:51:38.117495 | orchestrator | 2026-04-09 00:51:38.117501 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-09 00:51:38.117505 | orchestrator | Thursday 09 April 2026 00:50:41 +0000 (0:00:03.786) 0:00:14.283 ******** 2026-04-09 00:51:38.117509 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.117833 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.117847 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:51:38.117853 | orchestrator | 2026-04-09 00:51:38.117860 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-09 00:51:38.117866 | orchestrator | Thursday 09 April 2026 00:50:42 +0000 (0:00:01.304) 0:00:15.588 ******** 2026-04-09 00:51:38.117873 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:51:38.117887 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:51:38.117894 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:51:38.117900 | orchestrator | 2026-04-09 00:51:38.117906 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 00:51:38.117912 | orchestrator | Thursday 09 April 2026 00:50:46 +0000 (0:00:03.693) 0:00:19.282 ******** 2026-04-09 00:51:38.117919 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:51:38.117926 | orchestrator | 2026-04-09 00:51:38.117933 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-09 00:51:38.117940 | orchestrator | Thursday 09 April 2026 00:50:46 +0000 (0:00:00.464) 0:00:19.747 ******** 2026-04-09 00:51:38.117957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:51:38.117964 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.117976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:51:38.117988 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.118000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:51:38.118056 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:38.118069 | orchestrator | 2026-04-09 00:51:38.118076 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-09 00:51:38.118083 | orchestrator | Thursday 09 April 2026 00:50:49 
+0000 (0:00:02.898) 0:00:22.646 ******** 2026-04-09 00:51:38.118093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:51:38.118106 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
00:51:38.118118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:51:38.118125 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.118148 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:51:38.118165 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.118172 | orchestrator | 2026-04-09 00:51:38.118178 | orchestrator | TASK [service-cert-copy 
: mariadb | Copying over backend internal TLS key] *****
2026-04-09 00:51:38.118185 | orchestrator | Thursday 09 April 2026 00:50:51 +0000 (0:00:01.945) 0:00:24.592 ********
2026-04-09 00:51:38.118204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.118212 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.118296 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.118310 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118316 | orchestrator |
2026-04-09 00:51:38.118323 | orchestrator | TASK [service-check-containers : mariadb | Check containers] *******************
2026-04-09 00:51:38.118333 | orchestrator | Thursday 09 April 2026 00:50:53 +0000 (0:00:02.098) 0:00:26.690 ********
2026-04-09 00:51:38.118344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.118356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.118372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.118380 | orchestrator |
2026-04-09 00:51:38.118386 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] ***
2026-04-09 00:51:38.118397 | orchestrator | Thursday 09 April 2026 00:50:56 +0000 (0:00:02.428) 0:00:29.118 ********
2026-04-09 00:51:38.118404 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 00:51:38.118410 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:51:38.118417 | orchestrator | }
2026-04-09 00:51:38.118423 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 00:51:38.118429 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:51:38.118439 | orchestrator | }
2026-04-09 00:51:38.118446 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 00:51:38.118453 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:51:38.118460 | orchestrator | }
2026-04-09 00:51:38.118466 | orchestrator |
2026-04-09 00:51:38.118473 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 00:51:38.118479 | orchestrator | Thursday 09 April 2026 00:50:56 +0000 (0:00:00.281) 0:00:29.399 ********
2026-04-09 00:51:38.118487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.118494 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.118523 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.118537 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118544 | orchestrator |
2026-04-09 00:51:38.118551 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-04-09 00:51:38.118558 | orchestrator | Thursday 09 April 2026 00:50:58 +0000 (0:00:02.003) 0:00:31.402 ********
2026-04-09 00:51:38.118565 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118571 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118577 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118583 | orchestrator |
2026-04-09 00:51:38.118590 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-04-09 00:51:38.118597 | orchestrator | Thursday 09 April 2026 00:50:58 +0000 (0:00:00.364) 0:00:31.767 ********
2026-04-09 00:51:38.118604 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118610 | orchestrator |
2026-04-09 00:51:38.118620 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-04-09 00:51:38.118627 | orchestrator | Thursday 09 April 2026 00:50:59 +0000 (0:00:00.100) 0:00:31.867 ********
2026-04-09 00:51:38.118631 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118636 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118640 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118645 | orchestrator |
2026-04-09 00:51:38.118649 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-04-09 00:51:38.118654 | orchestrator | Thursday 09 April 2026 00:50:59 +0000 (0:00:00.283) 0:00:32.150 ********
2026-04-09 00:51:38.118663 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118668 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118672 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118677 | orchestrator |
2026-04-09 00:51:38.118682 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-04-09 00:51:38.118687 | orchestrator | Thursday 09 April 2026 00:50:59 +0000 (0:00:00.277) 0:00:32.428 ********
2026-04-09 00:51:38.118691 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118695 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118700 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118705 | orchestrator |
2026-04-09 00:51:38.118709 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-04-09 00:51:38.118713 | orchestrator | Thursday 09 April 2026 00:50:59 +0000 (0:00:00.274) 0:00:32.702 ********
2026-04-09 00:51:38.118719 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118723 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118727 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118731 | orchestrator |
2026-04-09 00:51:38.118735 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-04-09 00:51:38.118739 | orchestrator | Thursday 09 April 2026 00:51:00 +0000 (0:00:00.274) 0:00:33.112 ********
2026-04-09 00:51:38.118743 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118747 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118751 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118755 | orchestrator |
2026-04-09 00:51:38.118759 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-04-09 00:51:38.118763 | orchestrator | Thursday 09 April 2026 00:51:00 +0000 (0:00:00.274) 0:00:33.386 ********
2026-04-09 00:51:38.118766 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118770 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118774 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118778 | orchestrator |
2026-04-09 00:51:38.118782 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-04-09 00:51:38.118786 | orchestrator | Thursday 09 April 2026 00:51:00 +0000 (0:00:00.277) 0:00:33.664 ********
2026-04-09 00:51:38.118790 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:51:38.118794 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:51:38.118798 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:51:38.118810 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118815 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 00:51:38.118819 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 00:51:38.118823 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 00:51:38.118827 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118831 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 00:51:38.118835 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 00:51:38.118838 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 00:51:38.118842 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118846 | orchestrator |
2026-04-09 00:51:38.118850 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-04-09 00:51:38.118854 | orchestrator | Thursday 09 April 2026 00:51:01 +0000 (0:00:00.308) 0:00:33.973 ********
2026-04-09 00:51:38.118858 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118862 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118866 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118870 | orchestrator |
2026-04-09 00:51:38.118874 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-04-09 00:51:38.118878 | orchestrator | Thursday 09 April 2026 00:51:01 +0000 (0:00:00.394) 0:00:34.367 ********
2026-04-09 00:51:38.118882 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118889 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118893 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118897 | orchestrator |
2026-04-09 00:51:38.118901 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-04-09 00:51:38.118905 | orchestrator | Thursday 09 April 2026 00:51:01 +0000 (0:00:00.297) 0:00:34.664 ********
2026-04-09 00:51:38.118909 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118913 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118917 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118921 | orchestrator |
2026-04-09 00:51:38.118924 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-04-09 00:51:38.118928 | orchestrator | Thursday 09 April 2026 00:51:02 +0000 (0:00:00.272) 0:00:34.937 ********
2026-04-09 00:51:38.118932 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118936 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118940 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118944 | orchestrator |
2026-04-09 00:51:38.118948 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-04-09 00:51:38.118952 | orchestrator | Thursday 09 April 2026 00:51:02 +0000 (0:00:00.289) 0:00:35.226 ********
2026-04-09 00:51:38.118956 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118960 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118964 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118967 | orchestrator |
2026-04-09 00:51:38.118971 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-04-09 00:51:38.118975 | orchestrator | Thursday 09 April 2026 00:51:02 +0000 (0:00:00.376) 0:00:35.603 ********
2026-04-09 00:51:38.118979 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.118983 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.118990 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.118994 | orchestrator |
2026-04-09 00:51:38.118998 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-04-09 00:51:38.119002 | orchestrator | Thursday 09 April 2026 00:51:03 +0000 (0:00:00.269) 0:00:35.873 ********
2026-04-09 00:51:38.119006 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.119013 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.119019 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.119025 | orchestrator |
2026-04-09 00:51:38.119034 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************
2026-04-09 00:51:38.119043 | orchestrator | Thursday 09 April 2026 00:51:03 +0000 (0:00:00.279) 0:00:36.152 ********
2026-04-09 00:51:38.119049 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.119055 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.119061 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.119066 | orchestrator |
2026-04-09 00:51:38.119073 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] ****************************
2026-04-09 00:51:38.119079 | orchestrator | Thursday 09 April 2026 00:51:03 +0000 (0:00:00.284) 0:00:36.436 ********
2026-04-09 00:51:38.119090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.119102 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.119114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.119122 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.119131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.119141 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.119145 | orchestrator |
2026-04-09 00:51:38.119149 | orchestrator | TASK [mariadb : Wait for slave MariaDB] ****************************************
2026-04-09 00:51:38.119153 | orchestrator | Thursday 09 April 2026 00:51:05 +0000 (0:00:01.855) 0:00:38.291 ********
2026-04-09 00:51:38.119157 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.119162 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.119168 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:51:38.119174 | orchestrator |
2026-04-09 00:51:38.119179 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] ***************************
2026-04-09 00:51:38.119220 | orchestrator | Thursday 09 April 2026 00:51:05 +0000 (0:00:00.383) 0:00:38.675 ********
2026-04-09 00:51:38.119255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.119263 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:51:38.119274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-09 00:51:38.119287 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:51:38.119297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-09 00:51:38.119305 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119311 | orchestrator | 2026-04-09 00:51:38.119318 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-09 00:51:38.119324 | orchestrator | Thursday 09 April 2026 00:51:07 +0000 (0:00:01.800) 0:00:40.475 ******** 2026-04-09 00:51:38.119330 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:38.119334 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.119338 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119342 | orchestrator | 2026-04-09 00:51:38.119346 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-09 00:51:38.119350 | orchestrator | Thursday 09 April 2026 00:51:07 +0000 (0:00:00.275) 0:00:40.750 ******** 2026-04-09 00:51:38.119354 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:38.119358 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.119362 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119366 | orchestrator | 2026-04-09 00:51:38.119370 | orchestrator | TASK 
[service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-09 00:51:38.119377 | orchestrator | Thursday 09 April 2026 00:51:08 +0000 (0:00:00.294) 0:00:41.045 ******** 2026-04-09 00:51:38.119381 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:38.119385 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.119392 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119396 | orchestrator | 2026-04-09 00:51:38.119400 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-09 00:51:38.119404 | orchestrator | Thursday 09 April 2026 00:51:08 +0000 (0:00:00.424) 0:00:41.469 ******** 2026-04-09 00:51:38.119408 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:38.119412 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.119416 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119420 | orchestrator | 2026-04-09 00:51:38.119424 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-09 00:51:38.119428 | orchestrator | Thursday 09 April 2026 00:51:09 +0000 (0:00:00.431) 0:00:41.901 ******** 2026-04-09 00:51:38.119432 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:38.119436 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.119439 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119443 | orchestrator | 2026-04-09 00:51:38.119447 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-09 00:51:38.119451 | orchestrator | Thursday 09 April 2026 00:51:09 +0000 (0:00:00.250) 0:00:42.152 ******** 2026-04-09 00:51:38.119455 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:51:38.119459 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:51:38.119463 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:51:38.119467 | orchestrator | 2026-04-09 00:51:38.119471 | orchestrator | TASK 
[mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-09 00:51:38.119475 | orchestrator | Thursday 09 April 2026 00:51:10 +0000 (0:00:00.985) 0:00:43.137 ******** 2026-04-09 00:51:38.119479 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:38.119483 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:38.119487 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:38.119491 | orchestrator | 2026-04-09 00:51:38.119495 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-09 00:51:38.119499 | orchestrator | Thursday 09 April 2026 00:51:10 +0000 (0:00:00.292) 0:00:43.429 ******** 2026-04-09 00:51:38.119503 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:38.119507 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:38.119511 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:38.119517 | orchestrator | 2026-04-09 00:51:38.119523 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-09 00:51:38.119529 | orchestrator | Thursday 09 April 2026 00:51:10 +0000 (0:00:00.272) 0:00:43.702 ******** 2026-04-09 00:51:38.119536 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-09 00:51:38.119543 | orchestrator | ...ignoring 2026-04-09 00:51:38.119549 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-09 00:51:38.119555 | orchestrator | ...ignoring 2026-04-09 00:51:38.119563 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-09 00:51:38.119567 | orchestrator | ...ignoring 2026-04-09 00:51:38.119571 | orchestrator | 2026-04-09 00:51:38.119575 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-09 00:51:38.119579 | orchestrator | Thursday 09 April 2026 00:51:21 +0000 (0:00:10.704) 0:00:54.407 ******** 2026-04-09 00:51:38.119583 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:38.119587 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:38.119591 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:38.119595 | orchestrator | 2026-04-09 00:51:38.119599 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-09 00:51:38.119607 | orchestrator | Thursday 09 April 2026 00:51:21 +0000 (0:00:00.387) 0:00:54.794 ******** 2026-04-09 00:51:38.119611 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:38.119615 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.119619 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119623 | orchestrator | 2026-04-09 00:51:38.119627 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-09 00:51:38.119631 | orchestrator | Thursday 09 April 2026 00:51:22 +0000 (0:00:00.286) 0:00:55.080 ******** 2026-04-09 00:51:38.119635 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:38.119639 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.119642 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119646 | orchestrator | 2026-04-09 00:51:38.119650 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-09 00:51:38.119654 | orchestrator | Thursday 09 April 2026 00:51:22 +0000 (0:00:00.276) 0:00:55.357 ******** 2026-04-09 00:51:38.119658 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 00:51:38.119666 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.119670 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119674 | orchestrator | 2026-04-09 00:51:38.119678 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-09 00:51:38.119682 | orchestrator | Thursday 09 April 2026 00:51:22 +0000 (0:00:00.269) 0:00:55.627 ******** 2026-04-09 00:51:38.119686 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:51:38.119690 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:51:38.119694 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:51:38.119698 | orchestrator | 2026-04-09 00:51:38.119702 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-09 00:51:38.119706 | orchestrator | Thursday 09 April 2026 00:51:23 +0000 (0:00:00.270) 0:00:55.898 ******** 2026-04-09 00:51:38.119726 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:51:38.119730 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.119739 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119743 | orchestrator | 2026-04-09 00:51:38.119747 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 00:51:38.119751 | orchestrator | Thursday 09 April 2026 00:51:23 +0000 (0:00:00.421) 0:00:56.319 ******** 2026-04-09 00:51:38.119755 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.119759 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119763 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-09 00:51:38.119767 | orchestrator | 2026-04-09 00:51:38.119775 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-09 00:51:38.119779 | orchestrator | Thursday 09 April 2026 00:51:23 +0000 (0:00:00.323) 0:00:56.643 ******** 2026-04-09 
00:51:38.119784 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_iogcgk2f/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_iogcgk2f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_iogcgk2f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for 
http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:51:38.119792 | orchestrator | 2026-04-09 00:51:38.119796 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-09 00:51:38.119800 | orchestrator | Thursday 09 April 2026 00:51:27 +0000 (0:00:03.478) 0:01:00.121 ******** 2026-04-09 00:51:38.119804 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.119808 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119812 | orchestrator | 2026-04-09 00:51:38.119816 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-09 00:51:38.119820 | orchestrator | Thursday 09 April 2026 00:51:27 +0000 (0:00:00.542) 0:01:00.664 ******** 2026-04-09 00:51:38.119824 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:51:38.119828 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:51:38.119831 | orchestrator | 2026-04-09 00:51:38.119835 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-09 00:51:38.119839 | orchestrator | Thursday 09 April 2026 00:51:28 +0000 (0:00:00.196) 0:01:00.861 ******** 2026-04-09 00:51:38.119843 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:51:38.119847 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:51:38.119854 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-09 00:51:38.119858 | orchestrator | 2026-04-09 00:51:38.119862 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-09 00:51:38.119866 | orchestrator | skipping: no hosts matched 2026-04-09 00:51:38.119870 | orchestrator | 2026-04-09 00:51:38.119874 | orchestrator | PLAY [Start mariadb services] ************************************************** 
2026-04-09 00:51:38.119878 | orchestrator | 2026-04-09 00:51:38.119882 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-09 00:51:38.119886 | orchestrator | Thursday 09 April 2026 00:51:28 +0000 (0:00:00.213) 0:01:01.074 ******** 2026-04-09 00:51:38.119893 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_b9aulvbg/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_b9aulvbg/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_b9aulvbg/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_b9aulvbg/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 
429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fmariadb-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:51:38.119902 | orchestrator | 2026-04-09 00:51:38.119906 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:51:38.119910 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-09 00:51:38.119914 | orchestrator | testbed-node-0 : ok=20  changed=9  unreachable=0 failed=1  skipped=33  rescued=0 ignored=1  2026-04-09 00:51:38.119920 | orchestrator | testbed-node-1 : ok=16  changed=7  unreachable=0 failed=1  skipped=38  rescued=0 ignored=1  2026-04-09 00:51:38.119924 | orchestrator | testbed-node-2 : ok=16  changed=7  unreachable=0 failed=0 skipped=38  rescued=0 ignored=1  2026-04-09 00:51:38.119930 | orchestrator | 2026-04-09 00:51:38.119934 | orchestrator | 2026-04-09 00:51:38.119938 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:51:38.119942 | orchestrator | Thursday 09 April 2026 00:51:37 +0000 (0:00:09.636) 0:01:10.711 ******** 2026-04-09 00:51:38.119946 | orchestrator | =============================================================================== 2026-04-09 00:51:38.119950 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.70s 2026-04-09 00:51:38.119953 | orchestrator | mariadb : Restart 
MariaDB container ------------------------------------- 9.64s 2026-04-09 00:51:38.119960 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.79s 2026-04-09 00:51:38.119964 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.69s 2026-04-09 00:51:38.119968 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 3.48s 2026-04-09 00:51:38.119972 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.30s 2026-04-09 00:51:38.119976 | orchestrator | Check MariaDB service --------------------------------------------------- 3.09s 2026-04-09 00:51:38.119982 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.90s 2026-04-09 00:51:38.119988 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 2.43s 2026-04-09 00:51:38.119994 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.10s 2026-04-09 00:51:38.120000 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.00s 2026-04-09 00:51:38.120006 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 1.95s 2026-04-09 00:51:38.120012 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 1.86s 2026-04-09 00:51:38.120022 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 1.80s 2026-04-09 00:51:38.120029 | orchestrator | mariadb : Copying over my.cnf for mariabackup --------------------------- 1.36s 2026-04-09 00:51:38.120039 | orchestrator | mariadb : Copying over config.json files for mariabackup ---------------- 1.31s 2026-04-09 00:51:38.120046 | orchestrator | mariadb : Create MariaDB volume ----------------------------------------- 0.99s 2026-04-09 00:51:38.120052 | orchestrator | mariadb : include_tasks 
------------------------------------------------- 0.70s 2026-04-09 00:51:38.120058 | orchestrator | mariadb : Ensuring database backup config directory exists -------------- 0.59s 2026-04-09 00:51:38.120065 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s 2026-04-09 00:51:38.120072 | orchestrator | 2026-04-09 00:51:38 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:51:38.120078 | orchestrator | 2026-04-09 00:51:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:41.160007 | orchestrator | 2026-04-09 00:51:41 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:51:41.162551 | orchestrator | 2026-04-09 00:51:41 | INFO  | Task 3862cdb1-77cb-4519-bbaa-519bf7f2848f is in state STARTED 2026-04-09 00:51:41.163835 | orchestrator | 2026-04-09 00:51:41 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:51:41.164144 | orchestrator | 2026-04-09 00:51:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:44.200698 | orchestrator | 2026-04-09 00:51:44 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:51:44.202904 | orchestrator | 2026-04-09 00:51:44 | INFO  | Task 3862cdb1-77cb-4519-bbaa-519bf7f2848f is in state STARTED 2026-04-09 00:51:44.204902 | orchestrator | 2026-04-09 00:51:44 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:51:44.208166 | orchestrator | 2026-04-09 00:51:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:47.252879 | orchestrator | 2026-04-09 00:51:47 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:51:47.254710 | orchestrator | 2026-04-09 00:51:47 | INFO  | Task 3862cdb1-77cb-4519-bbaa-519bf7f2848f is in state STARTED 2026-04-09 00:51:47.257122 | orchestrator | 2026-04-09 00:51:47 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 
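The fatal `invalid reference format` errors above are raised by the Docker daemon before any pull happens: the image name `registry.osism.tech/kolla/release//mariadb-server` contains an empty path component (the doubled slash, plausibly from an unset namespace variable in the image name template), and every slash-separated component of an image reference must be non-empty and well formed. A rough sketch of that rule (an assumption, a simplified approximation of the distribution/reference grammar, not the full spec):

```python
import re

# Simplified component rule: lowercase alphanumerics, with single '.' or '_',
# double '_', or runs of '-' only between alphanumeric groups. An empty
# component (produced by '//') can never match.
COMPONENT = re.compile(r"^[a-z0-9]+(?:(?:[._]|__|-+)[a-z0-9]+)*$")

def image_name_valid(name: str) -> bool:
    """Return True if every slash-separated component of the image name
    (registry host and repository path, tag excluded) is well formed."""
    return all(COMPONENT.match(part) for part in name.split("/"))

print(image_name_valid("registry.osism.tech/kolla/release//mariadb-server"))  # False
print(image_name_valid("registry.osism.tech/kolla/release/mariadb-server"))   # True
```

This matches the failure mode seen in both the bootstrap task and the restart handler: the same malformed image string is rejected on every node, so fixing the configuration that assembles the image name (so it yields a single slash) should clear all of these failures at once.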
2026-04-09 00:51:47.257521 | orchestrator | 2026-04-09 00:51:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:50.294625 | orchestrator | 2026-04-09 00:51:50 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:51:50.294847 | orchestrator | 2026-04-09 00:51:50 | INFO  | Task 3862cdb1-77cb-4519-bbaa-519bf7f2848f is in state STARTED 2026-04-09 00:51:50.300164 | orchestrator | 2026-04-09 00:51:50 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:51:50.300277 | orchestrator | 2026-04-09 00:51:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:53.341106 | orchestrator | 2026-04-09 00:51:53 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:51:53.341440 | orchestrator | 2026-04-09 00:51:53 | INFO  | Task 3862cdb1-77cb-4519-bbaa-519bf7f2848f is in state STARTED 2026-04-09 00:51:53.345300 | orchestrator | 2026-04-09 00:51:53 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:51:53.345389 | orchestrator | 2026-04-09 00:51:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:56.380597 | orchestrator | 2026-04-09 00:51:56 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:51:56.380959 | orchestrator | 2026-04-09 00:51:56 | INFO  | Task 3862cdb1-77cb-4519-bbaa-519bf7f2848f is in state STARTED 2026-04-09 00:51:56.382901 | orchestrator | 2026-04-09 00:51:56 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:51:56.382943 | orchestrator | 2026-04-09 00:51:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:51:59.416962 | orchestrator | 2026-04-09 00:51:59 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:51:59.419010 | orchestrator | 2026-04-09 00:51:59 | INFO  | Task 3862cdb1-77cb-4519-bbaa-519bf7f2848f is in state STARTED 2026-04-09 00:51:59.420913 | orchestrator | 2026-04-09 
00:51:59 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:51:59.420956 | orchestrator | 2026-04-09 00:51:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:02.450506 | orchestrator | 2026-04-09 00:52:02 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:52:02.453601 | orchestrator | 2026-04-09 00:52:02 | INFO  | Task 3862cdb1-77cb-4519-bbaa-519bf7f2848f is in state STARTED 2026-04-09 00:52:02.453672 | orchestrator | 2026-04-09 00:52:02 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:02.453679 | orchestrator | 2026-04-09 00:52:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:05.480826 | orchestrator | 2026-04-09 00:52:05 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:52:05.480967 | orchestrator | 2026-04-09 00:52:05 | INFO  | Task 3862cdb1-77cb-4519-bbaa-519bf7f2848f is in state STARTED 2026-04-09 00:52:05.484541 | orchestrator | 2026-04-09 00:52:05 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:05.484639 | orchestrator | 2026-04-09 00:52:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:08.523275 | orchestrator | 2026-04-09 00:52:08 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:52:08.525086 | orchestrator | 2026-04-09 00:52:08 | INFO  | Task 3862cdb1-77cb-4519-bbaa-519bf7f2848f is in state STARTED 2026-04-09 00:52:08.526342 | orchestrator | 2026-04-09 00:52:08 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:08.526541 | orchestrator | 2026-04-09 00:52:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:11.567815 | orchestrator | 2026-04-09 00:52:11 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:52:11.567932 | orchestrator | 2026-04-09 00:52:11 | INFO  | Task 
3862cdb1-77cb-4519-bbaa-519bf7f2848f is in state SUCCESS 2026-04-09 00:52:11.568684 | orchestrator | 2026-04-09 00:52:11.568726 | orchestrator | 2026-04-09 00:52:11.568738 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:52:11.568750 | orchestrator | 2026-04-09 00:52:11.568760 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:52:11.568771 | orchestrator | Thursday 09 April 2026 00:51:41 +0000 (0:00:00.277) 0:00:00.277 ******** 2026-04-09 00:52:11.568781 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:11.568864 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:11.568884 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:11.568901 | orchestrator | 2026-04-09 00:52:11.568920 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:52:11.568937 | orchestrator | Thursday 09 April 2026 00:51:41 +0000 (0:00:00.264) 0:00:00.542 ******** 2026-04-09 00:52:11.569395 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-09 00:52:11.569417 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-09 00:52:11.569435 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-09 00:52:11.569487 | orchestrator | 2026-04-09 00:52:11.569505 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-09 00:52:11.569515 | orchestrator | 2026-04-09 00:52:11.569526 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:52:11.569536 | orchestrator | Thursday 09 April 2026 00:51:41 +0000 (0:00:00.292) 0:00:00.834 ******** 2026-04-09 00:52:11.569546 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:52:11.569557 | orchestrator | 2026-04-09 00:52:11.569597 | 
orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-09 00:52:11.569608 | orchestrator | Thursday 09 April 2026 00:51:42 +0000 (0:00:00.542) 0:00:01.376 ******** 2026-04-09 00:52:11.569645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:52:11.569699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:52:11.569776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:52:11.569797 | orchestrator | 2026-04-09 00:52:11.569814 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-09 00:52:11.569830 | orchestrator | Thursday 09 April 2026 00:51:43 +0000 (0:00:01.489) 0:00:02.866 ******** 2026-04-09 00:52:11.569847 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:11.569863 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:11.569878 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:11.569888 | orchestrator | 2026-04-09 00:52:11.569911 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:52:11.569921 | orchestrator | Thursday 09 April 2026 00:51:43 +0000 (0:00:00.297) 0:00:03.163 ******** 2026-04-09 00:52:11.569939 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-09 00:52:11.569949 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 
'enabled': 'no'})  2026-04-09 00:52:11.569959 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-09 00:52:11.569969 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-09 00:52:11.569981 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-09 00:52:11.569993 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-09 00:52:11.570004 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-09 00:52:11.570122 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-09 00:52:11.570151 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-09 00:52:11.570200 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-09 00:52:11.570218 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-09 00:52:11.570234 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-09 00:52:11.570251 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-09 00:52:11.570262 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-09 00:52:11.570272 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-09 00:52:11.570282 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-09 00:52:11.570292 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-09 00:52:11.570302 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-09 00:52:11.570311 | orchestrator | skipping: 
[testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-09 00:52:11.570321 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-09 00:52:11.570331 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-09 00:52:11.570340 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-09 00:52:11.570350 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-09 00:52:11.570360 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-09 00:52:11.570371 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-09 00:52:11.570383 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-09 00:52:11.570394 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-09 00:52:11.570404 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-09 00:52:11.570414 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-09 00:52:11.570424 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-09 00:52:11.570434 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 
=> (item={'name': 'manila', 'enabled': True}) 2026-04-09 00:52:11.570458 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-09 00:52:11.570468 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-09 00:52:11.570479 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-09 00:52:11.570489 | orchestrator | 2026-04-09 00:52:11.570499 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:52:11.570509 | orchestrator | Thursday 09 April 2026 00:51:44 +0000 (0:00:00.638) 0:00:03.802 ******** 2026-04-09 00:52:11.570519 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:11.570529 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:11.570549 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:11.570560 | orchestrator | 2026-04-09 00:52:11.570570 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:52:11.570580 | orchestrator | Thursday 09 April 2026 00:51:44 +0000 (0:00:00.341) 0:00:04.144 ******** 2026-04-09 00:52:11.570590 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.570601 | orchestrator | 2026-04-09 00:52:11.570611 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:52:11.570621 | orchestrator | Thursday 09 April 2026 00:51:44 +0000 (0:00:00.104) 0:00:04.248 ******** 2026-04-09 00:52:11.570631 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.570641 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.570650 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.570660 | 
orchestrator | 2026-04-09 00:52:11.570670 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:52:11.570680 | orchestrator | Thursday 09 April 2026 00:51:45 +0000 (0:00:00.233) 0:00:04.481 ******** 2026-04-09 00:52:11.570690 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:11.570700 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:11.570710 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:11.570720 | orchestrator | 2026-04-09 00:52:11.570730 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:52:11.570740 | orchestrator | Thursday 09 April 2026 00:51:45 +0000 (0:00:00.269) 0:00:04.750 ******** 2026-04-09 00:52:11.570755 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.570775 | orchestrator | 2026-04-09 00:52:11.570798 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:52:11.570814 | orchestrator | Thursday 09 April 2026 00:51:45 +0000 (0:00:00.095) 0:00:04.846 ******** 2026-04-09 00:52:11.570829 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.570845 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.570861 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.570875 | orchestrator | 2026-04-09 00:52:11.570891 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:52:11.570905 | orchestrator | Thursday 09 April 2026 00:51:45 +0000 (0:00:00.393) 0:00:05.239 ******** 2026-04-09 00:52:11.570921 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:11.570936 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:11.570952 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:11.570968 | orchestrator | 2026-04-09 00:52:11.570984 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:52:11.571000 | orchestrator 
| Thursday 09 April 2026 00:51:46 +0000 (0:00:00.248) 0:00:05.487 ******** 2026-04-09 00:52:11.571016 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.571032 | orchestrator | 2026-04-09 00:52:11.571050 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:52:11.571066 | orchestrator | Thursday 09 April 2026 00:51:46 +0000 (0:00:00.096) 0:00:05.584 ******** 2026-04-09 00:52:11.571084 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.571240 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.571264 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.571275 | orchestrator | 2026-04-09 00:52:11.571291 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:52:11.571315 | orchestrator | Thursday 09 April 2026 00:51:46 +0000 (0:00:00.241) 0:00:05.825 ******** 2026-04-09 00:52:11.571332 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:11.571350 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:11.571365 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:11.571381 | orchestrator | 2026-04-09 00:52:11.571396 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:52:11.571411 | orchestrator | Thursday 09 April 2026 00:51:46 +0000 (0:00:00.274) 0:00:06.100 ******** 2026-04-09 00:52:11.571428 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.571444 | orchestrator | 2026-04-09 00:52:11.571461 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:52:11.571478 | orchestrator | Thursday 09 April 2026 00:51:46 +0000 (0:00:00.115) 0:00:06.215 ******** 2026-04-09 00:52:11.571495 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.571512 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.571525 | orchestrator | skipping: [testbed-node-2] 2026-04-09 
00:52:11.571535 | orchestrator | 2026-04-09 00:52:11.571552 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:52:11.571563 | orchestrator | Thursday 09 April 2026 00:51:47 +0000 (0:00:00.378) 0:00:06.593 ******** 2026-04-09 00:52:11.571573 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:11.571583 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:11.571593 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:11.571603 | orchestrator | 2026-04-09 00:52:11.571631 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:52:11.571641 | orchestrator | Thursday 09 April 2026 00:51:47 +0000 (0:00:00.288) 0:00:06.881 ******** 2026-04-09 00:52:11.571651 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.571661 | orchestrator | 2026-04-09 00:52:11.571671 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:52:11.571681 | orchestrator | Thursday 09 April 2026 00:51:47 +0000 (0:00:00.134) 0:00:07.016 ******** 2026-04-09 00:52:11.571691 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.571702 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.571712 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.571722 | orchestrator | 2026-04-09 00:52:11.571732 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:52:11.571742 | orchestrator | Thursday 09 April 2026 00:51:47 +0000 (0:00:00.239) 0:00:07.256 ******** 2026-04-09 00:52:11.571752 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:11.571763 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:11.571773 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:11.571783 | orchestrator | 2026-04-09 00:52:11.571793 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 
00:52:11.571804 | orchestrator | Thursday 09 April 2026 00:51:48 +0000 (0:00:00.340) 0:00:07.596 ******** 2026-04-09 00:52:11.571814 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.571824 | orchestrator | 2026-04-09 00:52:11.571834 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:52:11.571845 | orchestrator | Thursday 09 April 2026 00:51:48 +0000 (0:00:00.103) 0:00:07.700 ******** 2026-04-09 00:52:11.571868 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.571879 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.571889 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.571899 | orchestrator | 2026-04-09 00:52:11.571909 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:52:11.571919 | orchestrator | Thursday 09 April 2026 00:51:48 +0000 (0:00:00.406) 0:00:08.107 ******** 2026-04-09 00:52:11.571929 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:11.571949 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:11.571959 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:11.571969 | orchestrator | 2026-04-09 00:52:11.571979 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:52:11.571989 | orchestrator | Thursday 09 April 2026 00:51:49 +0000 (0:00:00.309) 0:00:08.416 ******** 2026-04-09 00:52:11.571999 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.572009 | orchestrator | 2026-04-09 00:52:11.572019 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:52:11.572029 | orchestrator | Thursday 09 April 2026 00:51:49 +0000 (0:00:00.109) 0:00:08.526 ******** 2026-04-09 00:52:11.572039 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.572049 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.572060 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 00:52:11.572070 | orchestrator | 2026-04-09 00:52:11.572079 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:52:11.572090 | orchestrator | Thursday 09 April 2026 00:51:49 +0000 (0:00:00.257) 0:00:08.783 ******** 2026-04-09 00:52:11.572100 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:11.572110 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:11.572120 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:11.572130 | orchestrator | 2026-04-09 00:52:11.572140 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:52:11.572150 | orchestrator | Thursday 09 April 2026 00:51:49 +0000 (0:00:00.239) 0:00:09.023 ******** 2026-04-09 00:52:11.572384 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.572404 | orchestrator | 2026-04-09 00:52:11.572415 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:52:11.572425 | orchestrator | Thursday 09 April 2026 00:51:49 +0000 (0:00:00.214) 0:00:09.237 ******** 2026-04-09 00:52:11.572435 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.572445 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.572455 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.572476 | orchestrator | 2026-04-09 00:52:11.572485 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:52:11.572493 | orchestrator | Thursday 09 April 2026 00:51:50 +0000 (0:00:00.241) 0:00:09.479 ******** 2026-04-09 00:52:11.572510 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:11.572519 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:11.572527 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:11.572535 | orchestrator | 2026-04-09 00:52:11.572543 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2026-04-09 00:52:11.572551 | orchestrator | Thursday 09 April 2026 00:51:50 +0000 (0:00:00.276) 0:00:09.756 ******** 2026-04-09 00:52:11.572560 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.572568 | orchestrator | 2026-04-09 00:52:11.572577 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:52:11.572585 | orchestrator | Thursday 09 April 2026 00:51:50 +0000 (0:00:00.101) 0:00:09.857 ******** 2026-04-09 00:52:11.572593 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.572602 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.572610 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.572618 | orchestrator | 2026-04-09 00:52:11.572627 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-09 00:52:11.572635 | orchestrator | Thursday 09 April 2026 00:51:50 +0000 (0:00:00.234) 0:00:10.092 ******** 2026-04-09 00:52:11.572643 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:11.572651 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:11.572659 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:11.572668 | orchestrator | 2026-04-09 00:52:11.572676 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-09 00:52:11.572684 | orchestrator | Thursday 09 April 2026 00:51:51 +0000 (0:00:00.351) 0:00:10.443 ******** 2026-04-09 00:52:11.572692 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.572701 | orchestrator | 2026-04-09 00:52:11.572716 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-09 00:52:11.572733 | orchestrator | Thursday 09 April 2026 00:51:51 +0000 (0:00:00.117) 0:00:10.561 ******** 2026-04-09 00:52:11.572741 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.572750 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
00:52:11.572758 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.572766 | orchestrator | 2026-04-09 00:52:11.572774 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-09 00:52:11.572783 | orchestrator | Thursday 09 April 2026 00:51:51 +0000 (0:00:00.251) 0:00:10.812 ******** 2026-04-09 00:52:11.572791 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:52:11.572799 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:52:11.572808 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:52:11.572816 | orchestrator | 2026-04-09 00:52:11.572824 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-09 00:52:11.572832 | orchestrator | Thursday 09 April 2026 00:51:53 +0000 (0:00:01.472) 0:00:12.285 ******** 2026-04-09 00:52:11.572841 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-09 00:52:11.572850 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-09 00:52:11.572858 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-09 00:52:11.572866 | orchestrator | 2026-04-09 00:52:11.572874 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-09 00:52:11.572882 | orchestrator | Thursday 09 April 2026 00:51:54 +0000 (0:00:01.848) 0:00:14.134 ******** 2026-04-09 00:52:11.572891 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-09 00:52:11.572911 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-09 00:52:11.572920 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-09 00:52:11.572928 | orchestrator | 2026-04-09 
00:52:11.572937 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-09 00:52:11.572945 | orchestrator | Thursday 09 April 2026 00:51:57 +0000 (0:00:02.473) 0:00:16.607 ******** 2026-04-09 00:52:11.572953 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-09 00:52:11.572961 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-09 00:52:11.572970 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-09 00:52:11.572978 | orchestrator | 2026-04-09 00:52:11.572986 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-09 00:52:11.572994 | orchestrator | Thursday 09 April 2026 00:51:58 +0000 (0:00:01.596) 0:00:18.203 ******** 2026-04-09 00:52:11.573003 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.573011 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.573019 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.573027 | orchestrator | 2026-04-09 00:52:11.573035 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-09 00:52:11.573044 | orchestrator | Thursday 09 April 2026 00:51:59 +0000 (0:00:00.275) 0:00:18.479 ******** 2026-04-09 00:52:11.573052 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.573060 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.573069 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.573077 | orchestrator | 2026-04-09 00:52:11.573085 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:52:11.573103 | orchestrator | Thursday 09 April 2026 00:51:59 +0000 (0:00:00.355) 0:00:18.835 ******** 2026-04-09 00:52:11.573112 | orchestrator | included: 
/ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:52:11.573129 | orchestrator | 2026-04-09 00:52:11.573143 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-09 00:52:11.573151 | orchestrator | Thursday 09 April 2026 00:52:00 +0000 (0:00:00.764) 0:00:19.600 ******** 2026-04-09 00:52:11.573193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:52:11.573219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:52:11.573246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:52:11.573256 | orchestrator | 2026-04-09 00:52:11.573264 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-09 00:52:11.573272 | orchestrator | Thursday 09 April 2026 00:52:01 +0000 (0:00:01.417) 0:00:21.017 ******** 2026-04-09 00:52:11.573281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:52:11.573296 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.573315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:52:11.573325 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.573338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:52:11.573352 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.573360 | orchestrator | 2026-04-09 00:52:11.573368 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-09 00:52:11.573376 | orchestrator | Thursday 09 April 2026 00:52:02 +0000 (0:00:00.757) 0:00:21.775 ******** 2026-04-09 00:52:11.573392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:52:11.573408 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.573421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:52:11.573429 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.573445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:52:11.573463 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.573471 | orchestrator | 2026-04-09 00:52:11.573479 | orchestrator | TASK 
[service-check-containers : horizon | Check containers] ******************* 2026-04-09 00:52:11.573488 | orchestrator | Thursday 09 April 2026 00:52:03 +0000 (0:00:01.276) 0:00:23.051 ******** 2026-04-09 00:52:11.573505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:52:11.573515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:52:11.573540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-09 00:52:11.573550 | orchestrator | 2026-04-09 00:52:11.573558 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-04-09 00:52:11.573571 | orchestrator | Thursday 09 April 2026 00:52:05 +0000 (0:00:01.318) 0:00:24.370 ******** 2026-04-09 00:52:11.573580 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:52:11.573588 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:52:11.573596 | orchestrator | } 2026-04-09 00:52:11.573605 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:52:11.573613 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:52:11.573621 | orchestrator | } 2026-04-09 00:52:11.573630 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:52:11.573638 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:52:11.573646 | orchestrator | } 2026-04-09 00:52:11.573654 | orchestrator | 2026-04-09 00:52:11.573662 | orchestrator | TASK 
[service-check-containers : Include tasks] ******************************** 2026-04-09 00:52:11.573670 | orchestrator | Thursday 09 April 2026 00:52:05 +0000 (0:00:00.390) 0:00:24.760 ******** 2026-04-09 00:52:11.573683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:52:11.573692 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.573708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:52:11.573721 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.573735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-09 00:52:11.573744 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.573752 | orchestrator | 2026-04-09 00:52:11.573760 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:52:11.573768 | orchestrator | Thursday 09 April 2026 00:52:06 +0000 (0:00:01.443) 0:00:26.203 ******** 2026-04-09 00:52:11.573776 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:11.573790 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:11.573798 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:11.573806 | orchestrator | 2026-04-09 00:52:11.573818 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-09 00:52:11.573827 | orchestrator | Thursday 09 April 2026 00:52:07 +0000 
(0:00:00.366) 0:00:26.570 ******** 2026-04-09 00:52:11.573835 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:52:11.573843 | orchestrator | 2026-04-09 00:52:11.573851 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-09 00:52:11.573859 | orchestrator | Thursday 09 April 2026 00:52:07 +0000 (0:00:00.660) 0:00:27.230 ******** 2026-04-09 00:52:11.573867 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-09 00:52:11.573875 | orchestrator | 2026-04-09 00:52:11.573883 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:52:11.573892 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=1  skipped=26  rescued=0 ignored=0 2026-04-09 00:52:11.573901 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-09 00:52:11.573911 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-09 00:52:11.573919 | orchestrator | 2026-04-09 00:52:11.573927 | orchestrator | 2026-04-09 00:52:11.573935 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:52:11.573943 | orchestrator | Thursday 09 April 2026 00:52:08 +0000 (0:00:00.803) 0:00:28.034 ******** 2026-04-09 00:52:11.573952 | orchestrator | =============================================================================== 2026-04-09 00:52:11.573960 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.47s 2026-04-09 00:52:11.573968 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.85s 2026-04-09 00:52:11.573976 | orchestrator | horizon : Copying over custom-settings.py 
------------------------------- 1.60s 2026-04-09 00:52:11.573984 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.49s 2026-04-09 00:52:11.573992 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.47s 2026-04-09 00:52:11.574000 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.44s 2026-04-09 00:52:11.574008 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.42s 2026-04-09 00:52:11.574074 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.32s 2026-04-09 00:52:11.574094 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.28s 2026-04-09 00:52:11.574113 | orchestrator | horizon : Creating Horizon database ------------------------------------- 0.80s 2026-04-09 00:52:11.574126 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2026-04-09 00:52:11.574139 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.76s 2026-04-09 00:52:11.574152 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s 2026-04-09 00:52:11.574182 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.64s 2026-04-09 00:52:11.574195 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2026-04-09 00:52:11.574209 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.41s 2026-04-09 00:52:11.574221 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.39s 2026-04-09 00:52:11.574241 | orchestrator | service-check-containers : horizon | Notify handlers to restart containers --- 0.39s 2026-04-09 00:52:11.574255 | orchestrator | horizon : Update custom policy file name 
-------------------------------- 0.38s 2026-04-09 00:52:11.574278 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.37s 2026-04-09 00:52:11.574291 | orchestrator | 2026-04-09 00:52:11 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:11.574303 | orchestrator | 2026-04-09 00:52:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:14.602371 | orchestrator | 2026-04-09 00:52:14 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:52:14.604502 | orchestrator | 2026-04-09 00:52:14 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:14.605203 | orchestrator | 2026-04-09 00:52:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:17.645389 | orchestrator | 2026-04-09 00:52:17 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:52:17.648397 | orchestrator | 2026-04-09 00:52:17 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:17.648479 | orchestrator | 2026-04-09 00:52:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:20.684668 | orchestrator | 2026-04-09 00:52:20 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state STARTED 2026-04-09 00:52:20.688106 | orchestrator | 2026-04-09 00:52:20 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:20.688251 | orchestrator | 2026-04-09 00:52:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:23.722223 | orchestrator | 2026-04-09 00:52:23 | INFO  | Task 6e00fb5b-cd13-4f9a-8c62-a18480f6fd8d is in state SUCCESS 2026-04-09 00:52:23.723354 | orchestrator | 2026-04-09 00:52:23.723448 | orchestrator | 2026-04-09 00:52:23.723464 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:52:23.723476 | orchestrator | 2026-04-09 00:52:23.723487 | orchestrator | 
TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:52:23.723498 | orchestrator | Thursday 09 April 2026 00:51:40 +0000 (0:00:00.273) 0:00:00.274 ******** 2026-04-09 00:52:23.723508 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:23.723518 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:23.723528 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:23.723537 | orchestrator | 2026-04-09 00:52:23.723546 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:52:23.723556 | orchestrator | Thursday 09 April 2026 00:51:41 +0000 (0:00:00.245) 0:00:00.519 ******** 2026-04-09 00:52:23.723565 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-09 00:52:23.723987 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-09 00:52:23.724029 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-09 00:52:23.724041 | orchestrator | 2026-04-09 00:52:23.724051 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-09 00:52:23.724061 | orchestrator | 2026-04-09 00:52:23.724071 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 00:52:23.724081 | orchestrator | Thursday 09 April 2026 00:51:41 +0000 (0:00:00.286) 0:00:00.805 ******** 2026-04-09 00:52:23.724092 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:52:23.724103 | orchestrator | 2026-04-09 00:52:23.724113 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-09 00:52:23.724122 | orchestrator | Thursday 09 April 2026 00:51:42 +0000 (0:00:00.610) 0:00:01.415 ******** 2026-04-09 00:52:23.724162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.724222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.724294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.724309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:52:23.724320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:52:23.724341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:52:23.724358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.724368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.724408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.724418 | orchestrator | 2026-04-09 00:52:23.724428 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-09 00:52:23.724441 | orchestrator | Thursday 09 April 2026 00:51:44 +0000 (0:00:02.326) 0:00:03.742 ******** 2026-04-09 00:52:23.724451 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:23.724462 | orchestrator | 2026-04-09 00:52:23.724471 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-09 00:52:23.724481 | orchestrator | Thursday 09 April 2026 00:51:44 +0000 (0:00:00.096) 0:00:03.839 ******** 2026-04-09 00:52:23.724490 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:23.724499 | orchestrator | skipping: [testbed-node-1] 2026-04-09 
00:52:23.724509 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:23.724518 | orchestrator | 2026-04-09 00:52:23.724528 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-09 00:52:23.724537 | orchestrator | Thursday 09 April 2026 00:51:44 +0000 (0:00:00.219) 0:00:04.059 ******** 2026-04-09 00:52:23.724547 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:52:23.724556 | orchestrator | 2026-04-09 00:52:23.724566 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 00:52:23.724584 | orchestrator | Thursday 09 April 2026 00:51:45 +0000 (0:00:00.785) 0:00:04.844 ******** 2026-04-09 00:52:23.724594 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:52:23.724603 | orchestrator | 2026-04-09 00:52:23.724613 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-09 00:52:23.724623 | orchestrator | Thursday 09 April 2026 00:51:46 +0000 (0:00:00.571) 0:00:05.416 ******** 2026-04-09 00:52:23.724635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.724652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.724673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.724685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:52:23.724704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:52:23.724714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:52:23.724728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.724758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.724769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.724787 | orchestrator | 2026-04-09 00:52:23.724797 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-09 00:52:23.724807 | orchestrator | Thursday 09 April 2026 00:51:48 +0000 (0:00:02.792) 0:00:08.208 ******** 2026-04-09 00:52:23.724817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:52:23.724834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.724844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:52:23.724859 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:23.724871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:52:23.724890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.724906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:52:23.724917 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:23.724927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:52:23.724943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.724953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:52:23.724963 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:23.724973 | orchestrator | 2026-04-09 00:52:23.724983 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-09 00:52:23.724992 | orchestrator | Thursday 09 April 2026 00:51:49 +0000 (0:00:00.483) 0:00:08.691 ******** 2026-04-09 00:52:23.725009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:52:23.725026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.725036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:52:23.725046 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:23.725061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:52:23.725071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:52:23.725099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.725109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.725119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:52:23.725152 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:23.725162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:52:23.725172 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:23.725181 | orchestrator | 2026-04-09 00:52:23.725191 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-09 00:52:23.725201 | orchestrator | Thursday 09 April 2026 00:51:50 +0000 (0:00:00.732) 0:00:09.424 ******** 2026-04-09 00:52:23.725218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.725244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.725257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 
2026-04-09 00:52:23.725268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:52:23.725282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:52:23.725293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:52:23.725313 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.725324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.725333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.725343 | orchestrator | 
2026-04-09 00:52:23.725352 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-09 00:52:23.725362 | orchestrator | Thursday 09 April 2026 00:51:53 +0000 (0:00:03.148) 0:00:12.572 ******** 2026-04-09 00:52:23.725377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.725386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-09 00:52:23.725403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.725409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.725415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.725422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.725431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.725442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.725453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.725459 | orchestrator | 2026-04-09 00:52:23.725465 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-09 00:52:23.725471 | orchestrator | Thursday 09 April 2026 00:51:58 +0000 (0:00:04.857) 0:00:17.429 ******** 2026-04-09 00:52:23.725479 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:52:23.725488 | 
orchestrator | changed: [testbed-node-1] 2026-04-09 00:52:23.725496 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:52:23.725504 | orchestrator | 2026-04-09 00:52:23.725512 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-09 00:52:23.725526 | orchestrator | Thursday 09 April 2026 00:51:59 +0000 (0:00:01.405) 0:00:18.835 ******** 2026-04-09 00:52:23.725535 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:23.725544 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:23.725552 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:23.725561 | orchestrator | 2026-04-09 00:52:23.725569 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-09 00:52:23.725577 | orchestrator | Thursday 09 April 2026 00:52:00 +0000 (0:00:00.848) 0:00:19.684 ******** 2026-04-09 00:52:23.725586 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:23.725595 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:23.725605 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:23.725613 | orchestrator | 2026-04-09 00:52:23.725623 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-09 00:52:23.725632 | orchestrator | Thursday 09 April 2026 00:52:00 +0000 (0:00:00.574) 0:00:20.258 ******** 2026-04-09 00:52:23.725641 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:23.725651 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:23.725660 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:23.725669 | orchestrator | 2026-04-09 00:52:23.725677 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-09 00:52:23.725686 | orchestrator | Thursday 09 April 2026 00:52:01 +0000 (0:00:00.242) 0:00:20.501 ******** 2026-04-09 00:52:23.725702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:52:23.725725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.725739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:52:23.725748 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:23.725766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:52:23.725776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.725785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:52:23.725799 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:23.725813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:52:23.725823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.725839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:52:23.725849 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:23.725858 | orchestrator | 2026-04-09 00:52:23.725867 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 00:52:23.725877 | orchestrator | Thursday 09 April 2026 00:52:01 +0000 (0:00:00.525) 0:00:21.026 ******** 2026-04-09 00:52:23.725885 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:23.725891 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:23.725896 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 00:52:23.725902 | orchestrator | 2026-04-09 00:52:23.725908 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-09 00:52:23.725913 | orchestrator | Thursday 09 April 2026 00:52:02 +0000 (0:00:00.261) 0:00:21.288 ******** 2026-04-09 00:52:23.725919 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-09 00:52:23.725926 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-09 00:52:23.725932 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-09 00:52:23.725938 | orchestrator | 2026-04-09 00:52:23.725944 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-09 00:52:23.725955 | orchestrator | Thursday 09 April 2026 00:52:03 +0000 (0:00:01.893) 0:00:23.181 ******** 2026-04-09 00:52:23.725961 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:52:23.725966 | orchestrator | 2026-04-09 00:52:23.725972 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-09 00:52:23.725978 | orchestrator | Thursday 09 April 2026 00:52:04 +0000 (0:00:00.890) 0:00:24.072 ******** 2026-04-09 00:52:23.725983 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:23.725989 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:23.725994 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:23.726000 | orchestrator | 2026-04-09 00:52:23.726006 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-09 00:52:23.726011 | orchestrator | Thursday 09 April 2026 00:52:05 +0000 (0:00:00.521) 0:00:24.594 ******** 2026-04-09 00:52:23.726059 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-09 00:52:23.726065 | orchestrator | ok: 
[testbed-node-2 -> localhost] 2026-04-09 00:52:23.726071 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:52:23.726077 | orchestrator | 2026-04-09 00:52:23.726083 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-09 00:52:23.726112 | orchestrator | Thursday 09 April 2026 00:52:06 +0000 (0:00:01.498) 0:00:26.092 ******** 2026-04-09 00:52:23.726119 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:52:23.726125 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:52:23.726186 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:52:23.726193 | orchestrator | 2026-04-09 00:52:23.726198 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-09 00:52:23.726204 | orchestrator | Thursday 09 April 2026 00:52:07 +0000 (0:00:00.346) 0:00:26.438 ******** 2026-04-09 00:52:23.726214 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-09 00:52:23.726220 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-09 00:52:23.726226 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-09 00:52:23.726232 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-09 00:52:23.726238 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-09 00:52:23.726243 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-09 00:52:23.726249 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-09 00:52:23.726256 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-09 00:52:23.726262 | orchestrator | 
changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-09 00:52:23.726267 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-09 00:52:23.726273 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-09 00:52:23.726278 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-09 00:52:23.726284 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-09 00:52:23.726290 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-09 00:52:23.726302 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-09 00:52:23.726308 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 00:52:23.726314 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 00:52:23.726320 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-09 00:52:23.726336 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 00:52:23.726341 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 00:52:23.726347 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-09 00:52:23.726353 | orchestrator |
2026-04-09 00:52:23.726359 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-04-09 00:52:23.726364 | orchestrator | Thursday 09 April 2026 00:52:15 +0000 (0:00:08.680) 0:00:35.119 ********
2026-04-09 00:52:23.726370 |
orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 00:52:23.726375 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 00:52:23.726381 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-09 00:52:23.726387 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 00:52:23.726392 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 00:52:23.726398 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-09 00:52:23.726404 | orchestrator | 2026-04-09 00:52:23.726410 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-04-09 00:52:23.726415 | orchestrator | Thursday 09 April 2026 00:52:18 +0000 (0:00:02.619) 0:00:37.739 ******** 2026-04-09 00:52:23.726422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.726432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.726443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-09 00:52:23.726454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:52:23.726461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:52:23.726467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-09 00:52:23.726476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.726482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.726499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-09 00:52:23.726505 | orchestrator | 2026-04-09 00:52:23.726511 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-09 00:52:23.726517 | orchestrator | Thursday 09 April 2026 00:52:20 +0000 (0:00:02.466) 0:00:40.205 ******** 2026-04-09 00:52:23.726523 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:52:23.726529 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:52:23.726535 | orchestrator | } 2026-04-09 00:52:23.726541 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:52:23.726547 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:52:23.726552 | orchestrator | } 2026-04-09 00:52:23.726558 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:52:23.726564 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:52:23.726570 | orchestrator | } 2026-04-09 00:52:23.726576 | orchestrator | 2026-04-09 00:52:23.726581 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:52:23.726587 | orchestrator | Thursday 09 April 2026 00:52:21 +0000 (0:00:00.286) 0:00:40.491 ******** 2026-04-09 00:52:23.726593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:52:23.726603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.726609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})  2026-04-09 00:52:23.726620 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:23.726631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:52:23.726638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.726649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:52:23.726658 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:23.726677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-09 00:52:23.726690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-ssh:27.0.1.20260328', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-09 00:52:23.726707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-09 00:52:23.726716 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:23.726725 | orchestrator | 2026-04-09 00:52:23.726734 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-09 00:52:23.726743 | orchestrator | Thursday 09 April 2026 00:52:22 +0000 (0:00:00.833) 0:00:41.325 ******** 2026-04-09 00:52:23.726757 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:52:23.726767 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:52:23.726776 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:52:23.726786 | orchestrator | 2026-04-09 00:52:23.726795 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-09 00:52:23.726805 | orchestrator | Thursday 09 April 2026 00:52:22 +0000 (0:00:00.249) 0:00:41.575 ******** 2026-04-09 00:52:23.726814 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-09 00:52:23.726824 | orchestrator | 2026-04-09 00:52:23.726833 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:52:23.726845 | orchestrator | testbed-node-0 : ok=18  changed=10  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-09 00:52:23.726860 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-09 00:52:23.726874 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-09 00:52:23.726884 | orchestrator | 2026-04-09 00:52:23.726892 | orchestrator | 2026-04-09 00:52:23.726901 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:52:23.726909 | orchestrator | Thursday 09 April 2026 00:52:23 +0000 (0:00:00.714) 0:00:42.290 ******** 2026-04-09 00:52:23.726918 | orchestrator | =============================================================================== 2026-04-09 00:52:23.726926 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.68s 2026-04-09 00:52:23.726935 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.86s 2026-04-09 00:52:23.726944 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.15s 2026-04-09 00:52:23.726953 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.79s 2026-04-09 00:52:23.726962 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.62s 2026-04-09 00:52:23.726972 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.47s 2026-04-09 00:52:23.726980 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.33s 2026-04-09 
00:52:23.726989 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.89s 2026-04-09 00:52:23.726998 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.50s 2026-04-09 00:52:23.727004 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.41s 2026-04-09 00:52:23.727017 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 0.89s 2026-04-09 00:52:23.727026 | orchestrator | keystone : Create Keystone domain-specific config directory ------------- 0.85s 2026-04-09 00:52:23.727034 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.83s 2026-04-09 00:52:23.727047 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.78s 2026-04-09 00:52:23.727057 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 0.73s 2026-04-09 00:52:23.727066 | orchestrator | keystone : Creating keystone database ----------------------------------- 0.71s 2026-04-09 00:52:23.727080 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.61s 2026-04-09 00:52:23.727088 | orchestrator | keystone : Get file list in custom domains folder ----------------------- 0.57s 2026-04-09 00:52:23.727097 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.57s 2026-04-09 00:52:23.727105 | orchestrator | keystone : Copying over existing policy file ---------------------------- 0.53s 2026-04-09 00:52:23.727113 | orchestrator | 2026-04-09 00:52:23 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:23.727122 | orchestrator | 2026-04-09 00:52:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:26.756629 | orchestrator | 2026-04-09 00:52:26 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED 2026-04-09 
00:52:26.759565 | orchestrator | 2026-04-09 00:52:26 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED 2026-04-09 00:52:26.759654 | orchestrator | 2026-04-09 00:52:26 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED 2026-04-09 00:52:26.759666 | orchestrator | 2026-04-09 00:52:26 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED 2026-04-09 00:52:26.760243 | orchestrator | 2026-04-09 00:52:26 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:26.760276 | orchestrator | 2026-04-09 00:52:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:29.800739 | orchestrator | 2026-04-09 00:52:29 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED 2026-04-09 00:52:29.800918 | orchestrator | 2026-04-09 00:52:29 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED 2026-04-09 00:52:29.801516 | orchestrator | 2026-04-09 00:52:29 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED 2026-04-09 00:52:29.802201 | orchestrator | 2026-04-09 00:52:29 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED 2026-04-09 00:52:29.802758 | orchestrator | 2026-04-09 00:52:29 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:29.802805 | orchestrator | 2026-04-09 00:52:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:32.846535 | orchestrator | 2026-04-09 00:52:32 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED 2026-04-09 00:52:32.848741 | orchestrator | 2026-04-09 00:52:32 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED 2026-04-09 00:52:32.850835 | orchestrator | 2026-04-09 00:52:32 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED 2026-04-09 00:52:32.853102 | orchestrator | 2026-04-09 00:52:32 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED 2026-04-09 
00:52:32.855275 | orchestrator | 2026-04-09 00:52:32 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:32.855470 | orchestrator | 2026-04-09 00:52:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:35.896209 | orchestrator | 2026-04-09 00:52:35 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED 2026-04-09 00:52:35.897543 | orchestrator | 2026-04-09 00:52:35 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED 2026-04-09 00:52:35.899379 | orchestrator | 2026-04-09 00:52:35 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED 2026-04-09 00:52:35.900743 | orchestrator | 2026-04-09 00:52:35 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED 2026-04-09 00:52:35.904468 | orchestrator | 2026-04-09 00:52:35 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:35.904630 | orchestrator | 2026-04-09 00:52:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:38.961352 | orchestrator | 2026-04-09 00:52:38 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED 2026-04-09 00:52:38.962504 | orchestrator | 2026-04-09 00:52:38 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED 2026-04-09 00:52:38.964316 | orchestrator | 2026-04-09 00:52:38 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED 2026-04-09 00:52:38.965668 | orchestrator | 2026-04-09 00:52:38 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED 2026-04-09 00:52:38.967152 | orchestrator | 2026-04-09 00:52:38 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:38.967223 | orchestrator | 2026-04-09 00:52:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:42.008196 | orchestrator | 2026-04-09 00:52:42 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED 2026-04-09 00:52:42.008770 | orchestrator 
| 2026-04-09 00:52:42 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED 2026-04-09 00:52:42.009526 | orchestrator | 2026-04-09 00:52:42 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED 2026-04-09 00:52:42.011180 | orchestrator | 2026-04-09 00:52:42 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED 2026-04-09 00:52:42.011687 | orchestrator | 2026-04-09 00:52:42 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:42.011709 | orchestrator | 2026-04-09 00:52:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:45.050847 | orchestrator | 2026-04-09 00:52:45 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED 2026-04-09 00:52:45.053238 | orchestrator | 2026-04-09 00:52:45 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED 2026-04-09 00:52:45.055390 | orchestrator | 2026-04-09 00:52:45 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED 2026-04-09 00:52:45.056793 | orchestrator | 2026-04-09 00:52:45 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED 2026-04-09 00:52:45.058401 | orchestrator | 2026-04-09 00:52:45 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:52:45.058534 | orchestrator | 2026-04-09 00:52:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:52:48.103933 | orchestrator | 2026-04-09 00:52:48 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED 2026-04-09 00:52:48.105659 | orchestrator | 2026-04-09 00:52:48 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED 2026-04-09 00:52:48.108079 | orchestrator | 2026-04-09 00:52:48 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED 2026-04-09 00:52:48.110075 | orchestrator | 2026-04-09 00:52:48 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED 2026-04-09 00:52:48.112456 | orchestrator | 
2026-04-09 00:52:48 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:52:48.112554 | orchestrator | 2026-04-09 00:52:48 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:51.153708 | orchestrator | 2026-04-09 00:52:51 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED
2026-04-09 00:52:51.155468 | orchestrator | 2026-04-09 00:52:51 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED
2026-04-09 00:52:51.156175 | orchestrator | 2026-04-09 00:52:51 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED
2026-04-09 00:52:51.157545 | orchestrator | 2026-04-09 00:52:51 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED
2026-04-09 00:52:51.158636 | orchestrator | 2026-04-09 00:52:51 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:52:51.160198 | orchestrator | 2026-04-09 00:52:51 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:54.201338 | orchestrator | 2026-04-09 00:52:54 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED
2026-04-09 00:52:54.203068 | orchestrator | 2026-04-09 00:52:54 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED
2026-04-09 00:52:54.204795 | orchestrator | 2026-04-09 00:52:54 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED
2026-04-09 00:52:54.206194 | orchestrator | 2026-04-09 00:52:54 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED
2026-04-09 00:52:54.207664 | orchestrator | 2026-04-09 00:52:54 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:52:54.207707 | orchestrator | 2026-04-09 00:52:54 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:52:57.245234 | orchestrator | 2026-04-09 00:52:57 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED
2026-04-09 00:52:57.247262 | orchestrator | 2026-04-09 00:52:57 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED
2026-04-09 00:52:57.249335 | orchestrator | 2026-04-09 00:52:57 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED
2026-04-09 00:52:57.250882 | orchestrator | 2026-04-09 00:52:57 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED
2026-04-09 00:52:57.252235 | orchestrator | 2026-04-09 00:52:57 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:52:57.252278 | orchestrator | 2026-04-09 00:52:57 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:00.294423 | orchestrator | 2026-04-09 00:53:00 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED
2026-04-09 00:53:00.296164 | orchestrator | 2026-04-09 00:53:00 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED
2026-04-09 00:53:00.297577 | orchestrator | 2026-04-09 00:53:00 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED
2026-04-09 00:53:00.299160 | orchestrator | 2026-04-09 00:53:00 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED
2026-04-09 00:53:00.300674 | orchestrator | 2026-04-09 00:53:00 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:00.300715 | orchestrator | 2026-04-09 00:53:00 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:03.338773 | orchestrator | 2026-04-09 00:53:03 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED
2026-04-09 00:53:03.340689 | orchestrator | 2026-04-09 00:53:03 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED
2026-04-09 00:53:03.342756 | orchestrator | 2026-04-09 00:53:03 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED
2026-04-09 00:53:03.344495 | orchestrator | 2026-04-09 00:53:03 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED
2026-04-09 00:53:03.346589 | orchestrator | 2026-04-09 00:53:03 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:03.346624 | orchestrator | 2026-04-09 00:53:03 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:06.428294 | orchestrator | 2026-04-09 00:53:06 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED
2026-04-09 00:53:06.428369 | orchestrator | 2026-04-09 00:53:06 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED
2026-04-09 00:53:06.428910 | orchestrator | 2026-04-09 00:53:06 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED
2026-04-09 00:53:06.429715 | orchestrator | 2026-04-09 00:53:06 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED
2026-04-09 00:53:06.430364 | orchestrator | 2026-04-09 00:53:06 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:06.430399 | orchestrator | 2026-04-09 00:53:06 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:09.479833 | orchestrator | 2026-04-09 00:53:09 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED
2026-04-09 00:53:09.481512 | orchestrator | 2026-04-09 00:53:09 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED
2026-04-09 00:53:09.482949 | orchestrator | 2026-04-09 00:53:09 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state STARTED
2026-04-09 00:53:09.484765 | orchestrator | 2026-04-09 00:53:09 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED
2026-04-09 00:53:09.486285 | orchestrator | 2026-04-09 00:53:09 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:09.486321 | orchestrator | 2026-04-09 00:53:09 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:12.531518 | orchestrator | 2026-04-09 00:53:12 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED
2026-04-09 00:53:12.534884 | orchestrator | 2026-04-09 00:53:12 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED
2026-04-09 00:53:12.535830 | orchestrator | 2026-04-09 00:53:12 | INFO  | Task 8ef83cf8-d993-46e2-a252-b721bd90068f is in state SUCCESS
2026-04-09 00:53:12.537463 | orchestrator | 2026-04-09 00:53:12 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED
2026-04-09 00:53:12.538951 | orchestrator | 2026-04-09 00:53:12 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:12.538994 | orchestrator | 2026-04-09 00:53:12 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:15.584355 | orchestrator | 2026-04-09 00:53:15 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED
2026-04-09 00:53:15.588398 | orchestrator | 2026-04-09 00:53:15 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED
2026-04-09 00:53:15.590426 | orchestrator | 2026-04-09 00:53:15 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED
2026-04-09 00:53:15.594975 | orchestrator | 2026-04-09 00:53:15 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:15.597114 | orchestrator | 2026-04-09 00:53:15 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:15.597547 | orchestrator | 2026-04-09 00:53:15 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:18.643623 | orchestrator | 2026-04-09 00:53:18 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED
2026-04-09 00:53:18.644858 | orchestrator | 2026-04-09 00:53:18 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED
2026-04-09 00:53:18.648821 | orchestrator | 2026-04-09 00:53:18 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED
2026-04-09 00:53:18.650455 | orchestrator | 2026-04-09 00:53:18 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:18.652349 | orchestrator | 2026-04-09 00:53:18 | INFO  | Task
1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:18.652409 | orchestrator | 2026-04-09 00:53:18 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:21.698325 | orchestrator | 2026-04-09 00:53:21 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED
2026-04-09 00:53:21.700318 | orchestrator | 2026-04-09 00:53:21 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state STARTED
2026-04-09 00:53:21.701806 | orchestrator | 2026-04-09 00:53:21 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state STARTED
2026-04-09 00:53:21.705573 | orchestrator | 2026-04-09 00:53:21 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:21.707672 | orchestrator | 2026-04-09 00:53:21 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:21.707783 | orchestrator | 2026-04-09 00:53:21 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:24.763406 | orchestrator | 2026-04-09 00:53:24 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:53:24.763502 | orchestrator | 2026-04-09 00:53:24 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state STARTED
2026-04-09 00:53:24.764236 | orchestrator | 2026-04-09 00:53:24 | INFO  | Task 91e4678d-4426-44b8-b340-45de5ae78001 is in state SUCCESS
2026-04-09 00:53:24.765423 | orchestrator | 2026-04-09 00:53:24 | INFO  | Task 8b7a4fbd-8699-4615-b53a-9dbae5992773 is in state SUCCESS
2026-04-09 00:53:24.766001 | orchestrator |
2026-04-09 00:53:24.766095 | orchestrator |
2026-04-09 00:53:24.766130 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-04-09 00:53:24.766141 | orchestrator |
2026-04-09 00:53:24.766148 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-04-09 00:53:24.766156 | orchestrator | Thursday 09 April 2026 00:52:28 +0000 (0:00:00.104) 0:00:00.104 ********
2026-04-09 00:53:24.766163 | orchestrator | changed: [localhost]
2026-04-09 00:53:24.766171 | orchestrator |
2026-04-09 00:53:24.766179 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-04-09 00:53:24.766186 | orchestrator | Thursday 09 April 2026 00:52:29 +0000 (0:00:00.978) 0:00:01.082 ********
2026-04-09 00:53:24.766193 | orchestrator | changed: [localhost]
2026-04-09 00:53:24.766200 | orchestrator |
2026-04-09 00:53:24.766207 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-04-09 00:53:24.766214 | orchestrator | Thursday 09 April 2026 00:53:06 +0000 (0:00:37.634) 0:00:38.716 ********
2026-04-09 00:53:24.766222 | orchestrator | changed: [localhost]
2026-04-09 00:53:24.766229 | orchestrator |
2026-04-09 00:53:24.766236 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:53:24.766243 | orchestrator |
2026-04-09 00:53:24.766251 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:53:24.766258 | orchestrator | Thursday 09 April 2026 00:53:11 +0000 (0:00:04.709) 0:00:43.425 ********
2026-04-09 00:53:24.766265 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:53:24.766272 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:53:24.766279 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:53:24.766286 | orchestrator |
2026-04-09 00:53:24.766333 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:53:24.766343 | orchestrator | Thursday 09 April 2026 00:53:11 +0000 (0:00:00.306) 0:00:43.732 ********
2026-04-09 00:53:24.766397 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-04-09 00:53:24.766405 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-04-09 00:53:24.766413 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-04-09 00:53:24.766420 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-04-09 00:53:24.766427 | orchestrator |
2026-04-09 00:53:24.766434 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-04-09 00:53:24.766441 | orchestrator | skipping: no hosts matched
2026-04-09 00:53:24.766449 | orchestrator |
2026-04-09 00:53:24.766455 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:53:24.766462 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:53:24.766471 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:53:24.766480 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:53:24.766501 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:53:24.766510 | orchestrator |
2026-04-09 00:53:24.766518 | orchestrator |
2026-04-09 00:53:24.766525 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:53:24.766531 | orchestrator | Thursday 09 April 2026 00:53:12 +0000 (0:00:00.400) 0:00:44.132 ********
2026-04-09 00:53:24.766538 | orchestrator | ===============================================================================
2026-04-09 00:53:24.766545 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 37.63s
2026-04-09 00:53:24.766552 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.71s
2026-04-09 00:53:24.766558 | orchestrator | Ensure the destination directory exists --------------------------------- 0.98s
2026-04-09 00:53:24.766565 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2026-04-09 00:53:24.766571 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-04-09 00:53:24.766577 | orchestrator |
2026-04-09 00:53:24.766583 | orchestrator |
2026-04-09 00:53:24.766589 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:53:24.766595 | orchestrator |
2026-04-09 00:53:24.766601 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:53:24.766608 | orchestrator | Thursday 09 April 2026 00:52:27 +0000 (0:00:00.423) 0:00:00.423 ********
2026-04-09 00:53:24.766614 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:53:24.766620 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:53:24.766626 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:53:24.766632 | orchestrator |
2026-04-09 00:53:24.766638 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:53:24.766645 | orchestrator | Thursday 09 April 2026 00:52:28 +0000 (0:00:00.332) 0:00:00.756 ********
2026-04-09 00:53:24.766728 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-04-09 00:53:24.766897 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-04-09 00:53:24.766906 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-04-09 00:53:24.766912 | orchestrator |
2026-04-09 00:53:24.766918 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-04-09 00:53:24.766925 | orchestrator |
2026-04-09 00:53:24.766931 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-09 00:53:24.766938 | orchestrator | Thursday 09 April 2026 00:52:28 +0000 (0:00:00.268) 0:00:01.025 ********
2026-04-09 00:53:24.766944 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:53:24.766951 | orchestrator |
2026-04-09 00:53:24.767027 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] *************
2026-04-09 00:53:24.767122 | orchestrator | Thursday 09 April 2026 00:52:28 +0000 (0:00:00.536) 0:00:01.562 ********
2026-04-09 00:53:24.767146 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (5 retries left).
2026-04-09 00:53:24.767154 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (4 retries left).
2026-04-09 00:53:24.767158 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (3 retries left).
2026-04-09 00:53:24.767162 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (2 retries left).
2026-04-09 00:53:24.767166 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (1 retries left).
2026-04-09 00:53:24.767173 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-09 00:53:24.767180 | orchestrator |
2026-04-09 00:53:24.767184 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:53:24.767188 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-09 00:53:24.767195 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:53:24.767204 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:53:24.767208 | orchestrator |
2026-04-09 00:53:24.767212 | orchestrator |
2026-04-09 00:53:24.767216 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:53:24.767220 | orchestrator | Thursday 09 April 2026 00:53:22 +0000 (0:00:53.602) 0:00:55.165 ********
2026-04-09 00:53:24.767224 | orchestrator | ===============================================================================
2026-04-09 00:53:24.767228 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------ 53.60s
2026-04-09 00:53:24.767232 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.54s
2026-04-09 00:53:24.767236 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-04-09 00:53:24.767240 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.27s
2026-04-09 00:53:24.767244 | orchestrator |
2026-04-09 00:53:24.767248 | orchestrator |
2026-04-09 00:53:24.767252 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:53:24.767256 | orchestrator |
2026-04-09 00:53:24.767269 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:53:24.767273 | orchestrator | Thursday 09 April 2026 00:52:26 +0000 (0:00:00.325) 0:00:00.325 ********
2026-04-09 00:53:24.767277 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:53:24.767281 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:53:24.767285 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:53:24.767289 | orchestrator |
2026-04-09 00:53:24.767293 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:53:24.767297 | orchestrator | Thursday 09 April 2026 00:52:26 +0000 (0:00:00.392) 0:00:00.717 ********
2026-04-09 00:53:24.767302 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-09 00:53:24.767306 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-09 00:53:24.767310 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-09 00:53:24.767314 | orchestrator |
2026-04-09 00:53:24.767356 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-09 00:53:24.767372 | orchestrator |
2026-04-09 00:53:24.767379 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-09 00:53:24.767385 | orchestrator | Thursday 09 April 2026 00:52:27 +0000 (0:00:00.526) 0:00:01.244 ********
2026-04-09 00:53:24.767392 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:53:24.767398 | orchestrator |
2026-04-09 00:53:24.767405 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************
2026-04-09 00:53:24.767409 | orchestrator | Thursday 09 April 2026 00:52:28 +0000 (0:00:00.849) 0:00:02.093 ********
2026-04-09 00:53:24.767413 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (5 retries left).
2026-04-09 00:53:24.767440 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (4 retries left).
2026-04-09 00:53:24.767445 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (3 retries left).
2026-04-09 00:53:24.767449 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (2 retries left).
2026-04-09 00:53:24.767453 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (1 retries left).
2026-04-09 00:53:24.767466 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-09 00:53:24.767470 | orchestrator |
2026-04-09 00:53:24.767474 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:53:24.767478 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-09 00:53:24.767482 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:53:24.767487 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:53:24.767491 | orchestrator |
2026-04-09 00:53:24.767495 | orchestrator |
2026-04-09 00:53:24.767499 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:53:24.767503 | orchestrator | Thursday 09 April 2026 00:53:22 +0000 (0:00:53.754) 0:00:55.847 ********
2026-04-09 00:53:24.767540 | orchestrator | ===============================================================================
2026-04-09 00:53:24.767545 | orchestrator | service-ks-register : designate | Creating/deleting services ----------- 53.75s
2026-04-09 00:53:24.767549 | orchestrator | designate : include_tasks ----------------------------------------------- 0.85s
2026-04-09 00:53:24.767554 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2026-04-09 00:53:24.767559 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s
2026-04-09
00:53:24.767563 | orchestrator | 2026-04-09 00:53:24 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:24.768358 | orchestrator | 2026-04-09 00:53:24 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:24.768388 | orchestrator | 2026-04-09 00:53:24 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:27.818093 | orchestrator | 2026-04-09 00:53:27 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:53:27.820250 | orchestrator | 2026-04-09 00:53:27 | INFO  | Task ae4ea0c3-fbbc-4183-a415-49999b2fc2e0 is in state SUCCESS
2026-04-09 00:53:27.821717 | orchestrator | 2026-04-09 00:53:27 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:27.822589 | orchestrator | 2026-04-09 00:53:27 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:27.822636 | orchestrator | 2026-04-09 00:53:27 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:30.864296 | orchestrator | 2026-04-09 00:53:30 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:53:30.865176 | orchestrator | 2026-04-09 00:53:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:53:30.867613 | orchestrator | 2026-04-09 00:53:30 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:30.870846 | orchestrator | 2026-04-09 00:53:30 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:30.870906 | orchestrator | 2026-04-09 00:53:30 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:33.920845 | orchestrator | 2026-04-09 00:53:33 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:53:33.923323 | orchestrator | 2026-04-09 00:53:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:53:33.924373 | orchestrator | 2026-04-09 00:53:33 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:33.926179 | orchestrator | 2026-04-09 00:53:33 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:33.926218 | orchestrator | 2026-04-09 00:53:33 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:36.976249 | orchestrator | 2026-04-09 00:53:36 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:53:36.978165 | orchestrator | 2026-04-09 00:53:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:53:36.979559 | orchestrator | 2026-04-09 00:53:36 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:36.981347 | orchestrator | 2026-04-09 00:53:36 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:36.981398 | orchestrator | 2026-04-09 00:53:36 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:40.022748 | orchestrator | 2026-04-09 00:53:40 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:53:40.027058 | orchestrator | 2026-04-09 00:53:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:53:40.029629 | orchestrator | 2026-04-09 00:53:40 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:40.031690 | orchestrator | 2026-04-09 00:53:40 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:40.031783 | orchestrator | 2026-04-09 00:53:40 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:43.093187 | orchestrator | 2026-04-09 00:53:43 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:53:43.095602 | orchestrator | 2026-04-09 00:53:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:53:43.098297 | orchestrator | 2026-04-09 00:53:43 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:43.100876 | orchestrator | 2026-04-09 00:53:43 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:43.100933 | orchestrator | 2026-04-09 00:53:43 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:46.141055 | orchestrator | 2026-04-09 00:53:46 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:53:46.145831 | orchestrator | 2026-04-09 00:53:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:53:46.148246 | orchestrator | 2026-04-09 00:53:46 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:46.151567 | orchestrator | 2026-04-09 00:53:46 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:46.152469 | orchestrator | 2026-04-09 00:53:46 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:49.201329 | orchestrator | 2026-04-09 00:53:49 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:53:49.203203 | orchestrator | 2026-04-09 00:53:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:53:49.207416 | orchestrator | 2026-04-09 00:53:49 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:49.208725 | orchestrator | 2026-04-09 00:53:49 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:49.208763 | orchestrator | 2026-04-09 00:53:49 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:52.257109 | orchestrator | 2026-04-09 00:53:52 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:53:52.258677 | orchestrator | 2026-04-09 00:53:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:53:52.260175 | orchestrator | 2026-04-09 00:53:52 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:52.263131 | orchestrator | 2026-04-09 00:53:52 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:52.263524 | orchestrator | 2026-04-09 00:53:52 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:55.312309 | orchestrator | 2026-04-09 00:53:55 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:53:55.314278 | orchestrator | 2026-04-09 00:53:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:53:55.316344 | orchestrator | 2026-04-09 00:53:55 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:55.317976 | orchestrator | 2026-04-09 00:53:55 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:55.318078 | orchestrator | 2026-04-09 00:53:55 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:53:58.366584 | orchestrator | 2026-04-09 00:53:58 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:53:58.367636 | orchestrator | 2026-04-09 00:53:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:53:58.369348 | orchestrator | 2026-04-09 00:53:58 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:53:58.371733 | orchestrator | 2026-04-09 00:53:58 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:53:58.371772 | orchestrator | 2026-04-09 00:53:58 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:54:01.410154 | orchestrator | 2026-04-09 00:54:01 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:54:01.411540 | orchestrator | 2026-04-09 00:54:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:54:01.413060 | orchestrator | 2026-04-09 00:54:01 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:54:01.414389 | orchestrator | 2026-04-09 00:54:01 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:54:01.414434 | orchestrator | 2026-04-09 00:54:01 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:54:04.460439 | orchestrator | 2026-04-09 00:54:04 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:54:04.461084 | orchestrator | 2026-04-09 00:54:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:54:04.462721 | orchestrator | 2026-04-09 00:54:04 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:54:04.464105 | orchestrator | 2026-04-09 00:54:04 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:54:04.464143 | orchestrator | 2026-04-09 00:54:04 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:54:07.502898 | orchestrator | 2026-04-09 00:54:07 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:54:07.503185 | orchestrator | 2026-04-09 00:54:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:54:07.504237 | orchestrator | 2026-04-09 00:54:07 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:54:07.505095 | orchestrator | 2026-04-09 00:54:07 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state STARTED
2026-04-09 00:54:07.505133 | orchestrator | 2026-04-09 00:54:07 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:54:10.549501 | orchestrator | 2026-04-09 00:54:10 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:54:10.550868 | orchestrator | 2026-04-09 00:54:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:54:10.552516 | orchestrator | 2026-04-09 00:54:10 | INFO  | Task
36564376-2b61-4713-8392-e095616cf9fc is in state STARTED 2026-04-09 00:54:10.553725 | orchestrator | 2026-04-09 00:54:10 | INFO  | Task 1c712154-ee49-4a97-8f84-77033be34fc7 is in state SUCCESS 2026-04-09 00:54:10.554104 | orchestrator | 2026-04-09 00:54:10.554131 | orchestrator | 2026-04-09 00:54:10.554137 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:54:10.554149 | orchestrator | 2026-04-09 00:54:10.554153 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:54:10.554159 | orchestrator | Thursday 09 April 2026 00:52:26 +0000 (0:00:00.323) 0:00:00.324 ******** 2026-04-09 00:54:10.554164 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:10.554186 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:10.554195 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:10.554199 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:10.554203 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:10.554207 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:10.554211 | orchestrator | 2026-04-09 00:54:10.554216 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:54:10.554220 | orchestrator | Thursday 09 April 2026 00:52:27 +0000 (0:00:00.895) 0:00:01.219 ******** 2026-04-09 00:54:10.554224 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-09 00:54:10.554229 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-09 00:54:10.554233 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-09 00:54:10.554237 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-09 00:54:10.554242 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-09 00:54:10.554246 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-09 00:54:10.554250 | orchestrator | 2026-04-09 
00:54:10.554254 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-09 00:54:10.554258 | orchestrator |
2026-04-09 00:54:10.554262 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-09 00:54:10.554266 | orchestrator | Thursday 09 April 2026 00:52:28 +0000 (0:00:00.665) 0:00:01.885 ********
2026-04-09 00:54:10.554271 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:54:10.554290 | orchestrator |
2026-04-09 00:54:10.554294 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-09 00:54:10.554298 | orchestrator | Thursday 09 April 2026 00:52:29 +0000 (0:00:00.951) 0:00:02.836 ********
2026-04-09 00:54:10.554302 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:10.554306 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:10.554310 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:10.554314 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:10.554317 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:10.554321 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:10.554326 | orchestrator |
2026-04-09 00:54:10.554330 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-09 00:54:10.554334 | orchestrator | Thursday 09 April 2026 00:52:31 +0000 (0:00:01.906) 0:00:04.743 ********
2026-04-09 00:54:10.554338 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:10.554342 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:10.554346 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:10.554350 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:10.554354 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:10.554357 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:10.554361 | orchestrator |
2026-04-09 00:54:10.554365 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-09 00:54:10.554369 | orchestrator | Thursday 09 April 2026 00:52:32 +0000 (0:00:01.076) 0:00:05.819 ********
2026-04-09 00:54:10.554373 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:10.554378 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:10.554382 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:10.554386 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:10.554390 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:10.554394 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:10.554398 | orchestrator |
2026-04-09 00:54:10.554402 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-09 00:54:10.554406 | orchestrator | Thursday 09 April 2026 00:52:32 +0000 (0:00:00.422) 0:00:06.242 ********
2026-04-09 00:54:10.554410 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:10.554414 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:10.554418 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:10.554422 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:10.554426 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:10.554430 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:10.554434 | orchestrator |
2026-04-09 00:54:10.554438 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] **************
2026-04-09 00:54:10.554442 | orchestrator | Thursday 09 April 2026 00:52:33 +0000 (0:00:00.579) 0:00:06.821 ********
2026-04-09 00:54:10.554446 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (5 retries left).
2026-04-09 00:54:10.554451 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (4 retries left).
2026-04-09 00:54:10.554455 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (3 retries left).
2026-04-09 00:54:10.554459 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (2 retries left).
2026-04-09 00:54:10.554463 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (1 retries left).
2026-04-09 00:54:10.554468 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-09 00:54:10.554474 | orchestrator |
2026-04-09 00:54:10.554478 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:54:10.554495 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=2  rescued=0 ignored=0
2026-04-09 00:54:10.554500 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0  skipped=2  rescued=0 ignored=0
2026-04-09 00:54:10.554507 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0  skipped=2  rescued=0 ignored=0
2026-04-09 00:54:10.554511 | orchestrator | testbed-node-3 : ok=5  changed=0 unreachable=0 failed=0  skipped=2  rescued=0 ignored=0
2026-04-09 00:54:10.554515 | orchestrator | testbed-node-4 : ok=5  changed=0 unreachable=0 failed=0  skipped=2  rescued=0 ignored=0
2026-04-09 00:54:10.554519 | orchestrator | testbed-node-5 : ok=5  changed=0 unreachable=0 failed=0  skipped=2  rescued=0 ignored=0
2026-04-09 00:54:10.554523 | orchestrator |
2026-04-09 00:54:10.554527 | orchestrator |
2026-04-09 00:54:10.554531 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:54:10.554535 | orchestrator | Thursday 09 April 2026 00:53:26 +0000 (0:00:53.320) 0:01:00.142 ********
2026-04-09 00:54:10.554539 | orchestrator | ===============================================================================
2026-04-09 00:54:10.554543 | orchestrator | service-ks-register : neutron | Creating/deleting services ------------- 53.32s
2026-04-09 00:54:10.554547 | orchestrator | neutron : Get container facts ------------------------------------------- 1.91s
2026-04-09 00:54:10.554551 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.08s
2026-04-09 00:54:10.554555 | orchestrator | neutron : include_tasks ------------------------------------------------- 0.95s
2026-04-09 00:54:10.554560 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.90s
2026-04-09 00:54:10.554567 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s
2026-04-09 00:54:10.554573 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.58s
2026-04-09 00:54:10.554582 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.42s
2026-04-09 00:54:10.554590 | orchestrator |
2026-04-09 00:54:10.554597 | orchestrator |
2026-04-09 00:54:10.554603 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:54:10.554609 | orchestrator |
2026-04-09 00:54:10.554616 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:54:10.554622 | orchestrator | Thursday 09 April 2026 00:53:15 +0000 (0:00:00.265) 0:00:00.265 ********
2026-04-09 00:54:10.554628 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:10.554633 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:10.554639 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:10.554645 | orchestrator |
2026-04-09 00:54:10.554651 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:54:10.554657 | orchestrator | Thursday 09 April 2026 00:53:15 +0000 (0:00:00.248) 0:00:00.514 ********
2026-04-09 00:54:10.554663 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-09 00:54:10.554669 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-09 00:54:10.554705 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-09 00:54:10.554721 | orchestrator |
2026-04-09 00:54:10.554726 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-09 00:54:10.554731 | orchestrator |
2026-04-09 00:54:10.554735 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-09 00:54:10.554740 | orchestrator | Thursday 09 April 2026 00:53:15 +0000 (0:00:00.288) 0:00:00.802 ********
2026-04-09 00:54:10.554744 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:10.554754 | orchestrator |
2026-04-09 00:54:10.554758 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-04-09 00:54:10.554763 | orchestrator | Thursday 09 April 2026 00:53:16 +0000 (0:00:00.554) 0:00:01.356 ********
2026-04-09 00:54:10.554768 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (5 retries left).
2026-04-09 00:54:10.554772 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (4 retries left).
2026-04-09 00:54:10.554777 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (3 retries left).
2026-04-09 00:54:10.554782 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (2 retries left).
2026-04-09 00:54:10.554786 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (1 retries left).
2026-04-09 00:54:10.554791 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-09 00:54:10.554796 | orchestrator |
2026-04-09 00:54:10.554800 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:54:10.554810 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-09 00:54:10.554815 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0  skipped=0 rescued=0 ignored=0
2026-04-09 00:54:10.554820 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0  skipped=0 rescued=0 ignored=0
2026-04-09 00:54:10.554831 | orchestrator |
2026-04-09 00:54:10.554835 | orchestrator |
2026-04-09 00:54:10.554840 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:54:10.554845 | orchestrator | Thursday 09 April 2026 00:54:09 +0000 (0:00:53.624) 0:00:54.980 ********
2026-04-09 00:54:10.554850 | orchestrator | ===============================================================================
2026-04-09 00:54:10.554854 | orchestrator | service-ks-register : placement | Creating/deleting services ----------- 53.62s
2026-04-09 00:54:10.554859 | orchestrator | placement : include_tasks ----------------------------------------------- 0.55s
2026-04-09 00:54:10.554863 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s
2026-04-09 00:54:10.554868 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s
2026-04-09 00:54:10.554872 | orchestrator | 2026-04-09 00:54:10 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:54:13.601445 | orchestrator | 2026-04-09 00:54:13 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:54:13.603661 | orchestrator | 2026-04-09 00:54:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:54:13.605958 | orchestrator | 2026-04-09 00:54:13 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:54:13.606149 | orchestrator | 2026-04-09 00:54:13 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:54:16.647890 | orchestrator | 2026-04-09 00:54:16 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:54:16.649314 | orchestrator | 2026-04-09 00:54:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:54:16.651216 | orchestrator | 2026-04-09 00:54:16 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:54:16.651261 | orchestrator | 2026-04-09 00:54:16 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:54:19.695333 | orchestrator | 2026-04-09 00:54:19 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state STARTED
2026-04-09 00:54:19.696877 | orchestrator | 2026-04-09 00:54:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:54:19.698371 | orchestrator | 2026-04-09 00:54:19 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:54:19.698418 | orchestrator | 2026-04-09 00:54:19 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:54:22.743209 | orchestrator | 2026-04-09 00:54:22 | INFO  | Task dbafef46-f5a8-43a9-8505-1cf9e14902c0 is in state SUCCESS
2026-04-09 00:54:22.745344 | orchestrator | 2026-04-09 00:54:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:54:22.746362 | orchestrator | 2026-04-09 00:54:22 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:54:22.746438 | orchestrator | 2026-04-09 00:54:22 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:54:25.793275 | orchestrator | 2026-04-09 00:54:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:54:25.796274 | orchestrator | 2026-04-09 00:54:25 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:54:25.796361 | orchestrator | 2026-04-09 00:54:25 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:54:28.836345 | orchestrator | 2026-04-09 00:54:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:54:28.837158 | orchestrator | 2026-04-09 00:54:28 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state STARTED
2026-04-09 00:54:28.837202 | orchestrator | 2026-04-09 00:54:28 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:54:31.893812 | orchestrator | 2026-04-09 00:54:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:54:31.902159 | orchestrator | 2026-04-09 00:54:31 | INFO  | Task 36564376-2b61-4713-8392-e095616cf9fc is in state SUCCESS
2026-04-09 00:54:31.903505 | orchestrator |
2026-04-09 00:54:31.903555 | orchestrator |
2026-04-09 00:54:31.903564 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:54:31.903572 | orchestrator |
2026-04-09 00:54:31.903579 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:54:31.903586 | orchestrator | Thursday 09 April 2026 00:53:25 +0000 (0:00:00.307) 0:00:00.307 ********
2026-04-09 00:54:31.903592 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.903599 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.903605 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.903612 | orchestrator |
2026-04-09 00:54:31.903619 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:54:31.903625 | orchestrator | Thursday 09 April 2026 00:53:25 +0000 (0:00:00.268) 0:00:00.575 ********
2026-04-09 00:54:31.903632 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-09 00:54:31.903653 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-09 00:54:31.903660 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-09 00:54:31.903668 | orchestrator |
2026-04-09 00:54:31.903677 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-09 00:54:31.903684 | orchestrator |
2026-04-09 00:54:31.903690 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-09 00:54:31.903696 | orchestrator | Thursday 09 April 2026 00:53:26 +0000 (0:00:00.279) 0:00:00.854 ********
2026-04-09 00:54:31.903701 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.903708 | orchestrator |
2026-04-09 00:54:31.903714 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] ***************
2026-04-09 00:54:31.903733 | orchestrator | Thursday 09 April 2026 00:53:26 +0000 (0:00:00.606) 0:00:01.461 ********
2026-04-09 00:54:31.903740 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (5 retries left).
2026-04-09 00:54:31.903746 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (4 retries left).
2026-04-09 00:54:31.903751 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (3 retries left).
2026-04-09 00:54:31.903757 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (2 retries left).
2026-04-09 00:54:31.903763 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (1 retries left).
2026-04-09 00:54:31.903822 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-09 00:54:31.903832 | orchestrator |
2026-04-09 00:54:31.903839 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:54:31.903846 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-09 00:54:31.903853 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0  skipped=0 rescued=0 ignored=0
2026-04-09 00:54:31.903860 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0  skipped=0 rescued=0 ignored=0
2026-04-09 00:54:31.903866 | orchestrator |
2026-04-09 00:54:31.903872 | orchestrator |
2026-04-09 00:54:31.903900 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:54:31.903907 | orchestrator | Thursday 09 April 2026 00:54:20 +0000 (0:00:53.551) 0:00:55.013 ********
2026-04-09 00:54:31.903912 | orchestrator | ===============================================================================
2026-04-09 00:54:31.903916 | orchestrator | service-ks-register : magnum | Creating/deleting services -------------- 53.55s
2026-04-09 00:54:31.903940 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.61s
2026-04-09 00:54:31.903945 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.28s
2026-04-09 00:54:31.903949 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-04-09 00:54:31.903953 | orchestrator |
2026-04-09 00:54:31.903970 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-09 00:54:31.903977 | orchestrator | 2.16.14
2026-04-09 00:54:31.904005 | orchestrator |
2026-04-09 00:54:31.904011 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-09 00:54:31.904017 | orchestrator |
2026-04-09 00:54:31.904036 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 00:54:31.904040 | orchestrator | Thursday 09 April 2026 00:43:49 +0000 (0:00:00.726) 0:00:00.726 ********
2026-04-09 00:54:31.904044 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.904049 | orchestrator |
2026-04-09 00:54:31.904053 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 00:54:31.904057 | orchestrator | Thursday 09 April 2026 00:43:51 +0000 (0:00:01.209) 0:00:01.935 ********
2026-04-09 00:54:31.904061 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.904065 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.904069 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.904073 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.904086 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.904097 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.904101 | orchestrator |
2026-04-09 00:54:31.904106 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-09 00:54:31.904111 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:01.823) 0:00:03.759 ********
2026-04-09 00:54:31.904115 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.904120 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.904125 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.904129 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.904134 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.904139 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.904143 | orchestrator |
2026-04-09 00:54:31.904148 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-09 00:54:31.904152 | orchestrator | Thursday 09 April 2026 00:43:53 +0000 (0:00:00.748) 0:00:04.507 ********
2026-04-09 00:54:31.904157 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.904165 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.904170 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.904174 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.904178 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.904183 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.904187 | orchestrator |
2026-04-09 00:54:31.904192 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-09 00:54:31.904196 | orchestrator | Thursday 09 April 2026 00:43:54 +0000 (0:00:00.824) 0:00:05.332 ********
2026-04-09 00:54:31.904201 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.904206 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.904210 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.904215 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.904219 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.904224 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.904228 | orchestrator |
2026-04-09 00:54:31.904233 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-09 00:54:31.904237 | orchestrator | Thursday 09 April 2026 00:43:55 +0000 (0:00:00.835) 0:00:06.167 ********
2026-04-09 00:54:31.904242 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.904246 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.904251 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.904256 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.904260 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.904265 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.904270 | orchestrator |
2026-04-09 00:54:31.904274 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-09 00:54:31.904278 | orchestrator | Thursday 09 April 2026 00:43:56 +0000 (0:00:00.719) 0:00:06.887 ********
2026-04-09 00:54:31.904283 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.904287 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.904292 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.904296 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.904301 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.904305 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.904309 | orchestrator |
2026-04-09 00:54:31.904314 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-09 00:54:31.904318 | orchestrator | Thursday 09 April 2026 00:43:57 +0000 (0:00:01.013) 0:00:07.900 ********
2026-04-09 00:54:31.904323 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.904328 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.904333 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.904337 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.904365 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.904370 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.904375 | orchestrator |
2026-04-09 00:54:31.904380 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-09 00:54:31.904384 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:00.913) 0:00:08.813 ********
2026-04-09 00:54:31.904389 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.904397 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.904401 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.904406 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.904410 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.904415 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.904419 | orchestrator |
2026-04-09 00:54:31.904424 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-09 00:54:31.904431 | orchestrator | Thursday 09 April 2026 00:43:58 +0000 (0:00:00.716) 0:00:09.530 ********
2026-04-09 00:54:31.904438 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 00:54:31.904444 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 00:54:31.904451 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 00:54:31.904458 | orchestrator |
2026-04-09 00:54:31.904463 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-09 00:54:31.904468 | orchestrator | Thursday 09 April 2026 00:44:00 +0000 (0:00:01.511) 0:00:11.041 ********
2026-04-09 00:54:31.904473 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.904478 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.904482 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.904486 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.904490 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.904494 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.904498 | orchestrator |
2026-04-09 00:54:31.904502 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-09 00:54:31.904506 | orchestrator | Thursday 09 April 2026 00:44:01 +0000 (0:00:02.532) 0:00:12.687 ********
2026-04-09 00:54:31.904510 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 00:54:31.904514 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 00:54:31.904517 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 00:54:31.904521 | orchestrator |
2026-04-09 00:54:31.904525 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-09 00:54:31.904529 | orchestrator | Thursday 09 April 2026 00:44:04 +0000 (0:00:00.842) 0:00:15.219 ********
2026-04-09 00:54:31.904544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 00:54:31.904549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 00:54:31.904557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 00:54:31.904562 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.904566 | orchestrator |
2026-04-09 00:54:31.904570 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-09 00:54:31.904574 | orchestrator | Thursday 09 April 2026 00:44:05 +0000 (0:00:00.842) 0:00:16.061 ********
2026-04-09 00:54:31.904579 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.904617 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.904643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.904647 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.904651 | orchestrator |
2026-04-09 00:54:31.904655 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-09 00:54:31.904659 | orchestrator | Thursday 09 April 2026 00:44:06 +0000 (0:00:01.307) 0:00:17.369 ********
2026-04-09 00:54:31.904667 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.904683 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.904687 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.904691 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.904695 | orchestrator |
2026-04-09 00:54:31.904699 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-09 00:54:31.904703 | orchestrator | Thursday 09 April 2026 00:44:06 +0000 (0:00:00.230) 0:00:17.600 ********
2026-04-09 00:54:31.904708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 00:44:02.494448', 'end': '2026-04-09 00:44:02.626620', 'delta': '0:00:00.132172', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.904717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 00:44:03.054173', 'end': '2026-04-09 00:44:03.155709', 'delta': '0:00:00.101536', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.904735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 00:44:04.145055', 'end': '2026-04-09 00:44:04.257131', 'delta': '0:00:00.112076', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.904741 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.904755 | orchestrator |
2026-04-09 00:54:31.904763 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-09 00:54:31.904767 | orchestrator | Thursday 09 April 2026 00:44:07 +0000 (0:00:00.238) 0:00:17.839 ********
2026-04-09 00:54:31.904771 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.904775 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.904799 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.904803 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.904807 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.904811 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.904815 | orchestrator |
2026-04-09 00:54:31.904819 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-09 00:54:31.904823 | orchestrator | Thursday 09 April 2026 00:44:10 +0000 (0:00:03.128) 0:00:20.967 ********
2026-04-09 00:54:31.904854 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:54:31.904858 | orchestrator |
2026-04-09 00:54:31.904862 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-09 00:54:31.904867 | orchestrator | Thursday 09 April 2026 00:44:11 +0000 (0:00:00.783) 0:00:21.750 ********
2026-04-09 00:54:31.904871 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.904875 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.904879 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.904883 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.904887 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.904891 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.904895 | orchestrator |
2026-04-09 00:54:31.904899 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-09 00:54:31.904903 | orchestrator | Thursday 09 April 2026 00:44:12 +0000 (0:00:01.041) 0:00:22.791 ********
2026-04-09 00:54:31.904907 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.904911 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.904915 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.904919 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.904923 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.904927 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.904931 | orchestrator |
2026-04-09 00:54:31.904935 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-09 00:54:31.904939 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:01.255) 0:00:24.047 ********
2026-04-09 00:54:31.904943 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.904947 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.904951 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.904955
| orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.904984 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.904989 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.904993 | orchestrator | 2026-04-09 00:54:31.904997 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 00:54:31.905013 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:00.443) 0:00:24.490 ******** 2026-04-09 00:54:31.905018 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.905022 | orchestrator | 2026-04-09 00:54:31.905026 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 00:54:31.905030 | orchestrator | Thursday 09 April 2026 00:44:13 +0000 (0:00:00.233) 0:00:24.723 ******** 2026-04-09 00:54:31.905034 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.905038 | orchestrator | 2026-04-09 00:54:31.905042 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 00:54:31.905046 | orchestrator | Thursday 09 April 2026 00:44:14 +0000 (0:00:00.221) 0:00:24.945 ******** 2026-04-09 00:54:31.905050 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.905054 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.905058 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.905062 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.905066 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.905073 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.905077 | orchestrator | 2026-04-09 00:54:31.905081 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 00:54:31.905085 | orchestrator | Thursday 09 April 2026 00:44:15 +0000 (0:00:00.893) 0:00:25.839 ******** 2026-04-09 00:54:31.905089 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.905093 | 
orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.905097 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.905101 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.905105 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.905109 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.905113 | orchestrator | 2026-04-09 00:54:31.905117 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 00:54:31.905121 | orchestrator | Thursday 09 April 2026 00:44:16 +0000 (0:00:01.075) 0:00:26.915 ******** 2026-04-09 00:54:31.905125 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.905129 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.905133 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.905137 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.905144 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.905148 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.905153 | orchestrator | 2026-04-09 00:54:31.905160 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-09 00:54:31.905167 | orchestrator | Thursday 09 April 2026 00:44:16 +0000 (0:00:00.712) 0:00:27.627 ******** 2026-04-09 00:54:31.905174 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.905184 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.905189 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.905196 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.905202 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.905208 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.905214 | orchestrator | 2026-04-09 00:54:31.905220 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 00:54:31.905226 | orchestrator | Thursday 09 April 2026 00:44:17 
+0000 (0:00:00.681) 0:00:28.309 ******** 2026-04-09 00:54:31.905235 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.905242 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.905248 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.905255 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.905261 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.905267 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.905273 | orchestrator | 2026-04-09 00:54:31.905279 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 00:54:31.905285 | orchestrator | Thursday 09 April 2026 00:44:18 +0000 (0:00:00.490) 0:00:28.799 ******** 2026-04-09 00:54:31.905330 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.905337 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.905342 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.905348 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.905357 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.905365 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.905371 | orchestrator | 2026-04-09 00:54:31.905377 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 00:54:31.905384 | orchestrator | Thursday 09 April 2026 00:44:18 +0000 (0:00:00.568) 0:00:29.368 ******** 2026-04-09 00:54:31.905390 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.905397 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.905403 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.905410 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.905417 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.905421 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.905425 | orchestrator | 2026-04-09 00:54:31.905429 | orchestrator | TASK [ceph-facts 
: Collect existed devices] ************************************ 2026-04-09 00:54:31.905504 | orchestrator | Thursday 09 April 2026 00:44:19 +0000 (0:00:00.711) 0:00:30.079 ******** 2026-04-09 00:54:31.905514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a7170513--cc74--5c6a--bf20--0648bd8fe211-osd--block--a7170513--cc74--5c6a--bf20--0648bd8fe211', 'dm-uuid-LVM-pe55oqTM5WXSDzjYyzUaRaqd3CMpaNVKPFQtec6Hf7WfksPCkUvb70pUeW8Rn5uq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b054f04d--2068--53f2--80e7--c9a997d8c167-osd--block--b054f04d--2068--53f2--80e7--c9a997d8c167', 'dm-uuid-LVM-L06u4HG1Z8VsVrmrPMttrHEynsdWY5tYPTFkVlcRUFzwwxtPZYElKrlXtNfRtW43'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905571 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part1', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part14', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part15', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part16', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:54:31.905722 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a7170513--cc74--5c6a--bf20--0648bd8fe211-osd--block--a7170513--cc74--5c6a--bf20--0648bd8fe211'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-x2LgyY-pFsN-CRjH-fIff-VqZQ-iJC0-uuKqoj', 'scsi-0QEMU_QEMU_HARDDISK_1117e366-620b-4195-b3cd-cb9d1ba2563b', 'scsi-SQEMU_QEMU_HARDDISK_1117e366-620b-4195-b3cd-cb9d1ba2563b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:54:31.905728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b054f04d--2068--53f2--80e7--c9a997d8c167-osd--block--b054f04d--2068--53f2--80e7--c9a997d8c167'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sSH1Wc-rYYU-IHt9-clfm-yIKH-cWs0-4yx0l1', 'scsi-0QEMU_QEMU_HARDDISK_cc2e9d6e-928c-46c6-aaaa-26c6da7e313f', 'scsi-SQEMU_QEMU_HARDDISK_cc2e9d6e-928c-46c6-aaaa-26c6da7e313f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:54:31.905738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b113bf69-5b2f-465f-b4d6-8ed3709e703c', 'scsi-SQEMU_QEMU_HARDDISK_b113bf69-5b2f-465f-b4d6-8ed3709e703c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:54:31.905743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:54:31.905747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bd7ebef9--c50f--5d78--8aca--8eab443ce24e-osd--block--bd7ebef9--c50f--5d78--8aca--8eab443ce24e', 'dm-uuid-LVM-DHlmD4zM6t0CAqLBKIqYSjRilxlYUBpjQoqaKbAkGOzIRpf04OLgwBCKB1uAEule'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c145dd89--b6cf--5d58--ae96--f0c6197297d1-osd--block--c145dd89--b6cf--5d58--ae96--f0c6197297d1', 
'dm-uuid-LVM-oPQcAC3b4g0q6IF1gfNDxqKrQQ0gRj9dgMIxCG22CyIBdyjBvFCda05eHhbShhTC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905788 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.905795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.905829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e1b9ff7a--7324--53df--902d--27a5c0e1e380-osd--block--e1b9ff7a--7324--53df--902d--27a5c0e1e380', 'dm-uuid-LVM-SVjSV9dQLY9i6LNd8kn9mbDKHPRpFDjaLiY1sw7Qqhj8B5em5drNdfrIXRXrBJsd'], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.906723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c85b9e91--1f7c--51a1--92b9--1f1081da5c54-osd--block--c85b9e91--1f7c--51a1--92b9--1f1081da5c54', 'dm-uuid-LVM-wwJgOyu1pTIB1IcZ0ixOqWljpfUKPIqNdd43eLv3qBvIeVziSCvGKQMWUepC7KsH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.906763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.906769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.906779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI 
storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  
2026-04-09 00:54:31.906784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:54:31.906824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bd7ebef9--c50f--5d78--8aca--8eab443ce24e-osd--block--bd7ebef9--c50f--5d78--8aca--8eab443ce24e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-php0qM-0Azd-kHee-TCGh-7MhG-Ev8e-m8IXL8', 'scsi-0QEMU_QEMU_HARDDISK_a2730516-0b41-4086-99de-bfe7a2602e3b', 'scsi-SQEMU_QEMU_HARDDISK_a2730516-0b41-4086-99de-bfe7a2602e3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:54:31.906833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c145dd89--b6cf--5d58--ae96--f0c6197297d1-osd--block--c145dd89--b6cf--5d58--ae96--f0c6197297d1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UvBVT8-BbxX-nFqu-R6Bp-Tkm7-HNbO-Iu1NbH', 'scsi-0QEMU_QEMU_HARDDISK_7d3f3539-bcc0-40e2-bb47-88465426d961', 'scsi-SQEMU_QEMU_HARDDISK_7d3f3539-bcc0-40e2-bb47-88465426d961'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-09 00:54:31.906854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
[... ~50 similar per-device skip entries elided: testbed-node-0, testbed-node-1 and testbed-node-2 each skip loop0-loop7 (empty loop devices), sda (QEMU HARDDISK, 80.00 GB, partitions sda1/sda14/sda15/sda16) and sr0 (QEMU DVD-ROM, label 'config-2'); testbed-node-4 also skips sdd (QEMU HARDDISK, 20.00 GB) and sr0; testbed-node-5 skips loop2-loop7, sda, the ceph OSD LVM volumes sdb and sdc (QEMU HARDDISK, 20.00 GB each), sdd and sr0 ...]
2026-04-09 00:54:31.907165 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.907325 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.907332 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.907337 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.907442 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.907448 | orchestrator |
2026-04-09 00:54:31.907454 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-09 00:54:31.907461 | orchestrator | Thursday 09 April 2026 00:44:21 +0000 (0:00:02.122)       0:00:32.202 ********
2026-04-09 00:54:31.907511 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a7170513--cc74--5c6a--bf20--0648bd8fe211-osd--block--a7170513--cc74--5c6a--bf20--0648bd8fe211', 'dm-uuid-LVM-pe55oqTM5WXSDzjYyzUaRaqd3CMpaNVKPFQtec6Hf7WfksPCkUvb70pUeW8Rn5uq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
[... a dozen similar skip entries elided: testbed-node-3 skips dm-1, loop0-loop7 and sda, and testbed-node-4 skips dm-0, dm-1 and loop0, all with skip_reason 'Conditional result was False' on false_condition 'osd_auto_discovery | default(False) | bool' ...]
2026-04-09 00:54:31.907671 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a7170513--cc74--5c6a--bf20--0648bd8fe211-osd--block--a7170513--cc74--5c6a--bf20--0648bd8fe211'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-x2LgyY-pFsN-CRjH-fIff-VqZQ-iJC0-uuKqoj', 'scsi-0QEMU_QEMU_HARDDISK_1117e366-620b-4195-b3cd-cb9d1ba2563b', 'scsi-SQEMU_QEMU_HARDDISK_1117e366-620b-4195-b3cd-cb9d1ba2563b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907704 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e1b9ff7a--7324--53df--902d--27a5c0e1e380-osd--block--e1b9ff7a--7324--53df--902d--27a5c0e1e380', 'dm-uuid-LVM-SVjSV9dQLY9i6LNd8kn9mbDKHPRpFDjaLiY1sw7Qqhj8B5em5drNdfrIXRXrBJsd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b054f04d--2068--53f2--80e7--c9a997d8c167-osd--block--b054f04d--2068--53f2--80e7--c9a997d8c167'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sSH1Wc-rYYU-IHt9-clfm-yIKH-cWs0-4yx0l1', 'scsi-0QEMU_QEMU_HARDDISK_cc2e9d6e-928c-46c6-aaaa-26c6da7e313f', 'scsi-SQEMU_QEMU_HARDDISK_cc2e9d6e-928c-46c6-aaaa-26c6da7e313f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907722 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c85b9e91--1f7c--51a1--92b9--1f1081da5c54-osd--block--c85b9e91--1f7c--51a1--92b9--1f1081da5c54', 'dm-uuid-LVM-wwJgOyu1pTIB1IcZ0ixOqWljpfUKPIqNdd43eLv3qBvIeVziSCvGKQMWUepC7KsH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907729 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907736 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b113bf69-5b2f-465f-b4d6-8ed3709e703c', 'scsi-SQEMU_QEMU_HARDDISK_b113bf69-5b2f-465f-b4d6-8ed3709e703c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907795 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907813 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907818 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907822 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907826 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907835 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907881 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907893 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907897 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907945 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part1', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part14', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part15', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part16', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907985 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e1b9ff7a--7324--53df--902d--27a5c0e1e380-osd--block--e1b9ff7a--7324--53df--902d--27a5c0e1e380'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hmNGQz-IaRB-JT1G-Ibaq-MHss-JZrN-2V2na8', 'scsi-0QEMU_QEMU_HARDDISK_4915a96f-c727-49cd-8e71-365065423554', 'scsi-SQEMU_QEMU_HARDDISK_4915a96f-c727-49cd-8e71-365065423554'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.907994 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c85b9e91--1f7c--51a1--92b9--1f1081da5c54-osd--block--c85b9e91--1f7c--51a1--92b9--1f1081da5c54'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VZ7sNY-nhf9-sUdm-OQ93-lYqN-j4aB-cnxbMZ', 'scsi-0QEMU_QEMU_HARDDISK_de323fae-e08c-44ab-9f5d-e0649991af02', 'scsi-SQEMU_QEMU_HARDDISK_de323fae-e08c-44ab-9f5d-e0649991af02'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.908001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0aa1a7f9-eb63-47f4-a3c4-c66e6167b3d6', 'scsi-SQEMU_QEMU_HARDDISK_0aa1a7f9-eb63-47f4-a3c4-c66e6167b3d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.908012 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.908061 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.908073 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.908079 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.908085 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.908092 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.908148 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-09 00:54:31.908163 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bd7ebef9--c50f--5d78--8aca--8eab443ce24e-osd--block--bd7ebef9--c50f--5d78--8aca--8eab443ce24e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-php0qM-0Azd-kHee-TCGh-7MhG-Ev8e-m8IXL8', 'scsi-0QEMU_QEMU_HARDDISK_a2730516-0b41-4086-99de-bfe7a2602e3b', 'scsi-SQEMU_QEMU_HARDDISK_a2730516-0b41-4086-99de-bfe7a2602e3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.908167 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c145dd89--b6cf--5d58--ae96--f0c6197297d1-osd--block--c145dd89--b6cf--5d58--ae96--f0c6197297d1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UvBVT8-BbxX-nFqu-R6Bp-Tkm7-HNbO-Iu1NbH', 'scsi-0QEMU_QEMU_HARDDISK_7d3f3539-bcc0-40e2-bb47-88465426d961', 'scsi-SQEMU_QEMU_HARDDISK_7d3f3539-bcc0-40e2-bb47-88465426d961'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.908174 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78a0dd59-f7ff-4f21-9079-dceaea0538fa', 'scsi-SQEMU_QEMU_HARDDISK_78a0dd59-f7ff-4f21-9079-dceaea0538fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:54:31.908178 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908209 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908218 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908222 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908226 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908233 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908237 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908241 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908278 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908287 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5052af24-97ad-428a-a556-7be1e7d9033f', 'scsi-SQEMU_QEMU_HARDDISK_5052af24-97ad-428a-a556-7be1e7d9033f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5052af24-97ad-428a-a556-7be1e7d9033f-part1', 'scsi-SQEMU_QEMU_HARDDISK_5052af24-97ad-428a-a556-7be1e7d9033f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5052af24-97ad-428a-a556-7be1e7d9033f-part14', 'scsi-SQEMU_QEMU_HARDDISK_5052af24-97ad-428a-a556-7be1e7d9033f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5052af24-97ad-428a-a556-7be1e7d9033f-part15', 'scsi-SQEMU_QEMU_HARDDISK_5052af24-97ad-428a-a556-7be1e7d9033f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5052af24-97ad-428a-a556-7be1e7d9033f-part16', 'scsi-SQEMU_QEMU_HARDDISK_5052af24-97ad-428a-a556-7be1e7d9033f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908295 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908302 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.908308 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908349 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908360 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908366 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908377 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908384 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908391 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908435 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908448 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3cfb3d4b-a336-425e-b827-5a144578e3d1', 'scsi-SQEMU_QEMU_HARDDISK_3cfb3d4b-a336-425e-b827-5a144578e3d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3cfb3d4b-a336-425e-b827-5a144578e3d1-part1', 'scsi-SQEMU_QEMU_HARDDISK_3cfb3d4b-a336-425e-b827-5a144578e3d1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3cfb3d4b-a336-425e-b827-5a144578e3d1-part14', 'scsi-SQEMU_QEMU_HARDDISK_3cfb3d4b-a336-425e-b827-5a144578e3d1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3cfb3d4b-a336-425e-b827-5a144578e3d1-part15', 'scsi-SQEMU_QEMU_HARDDISK_3cfb3d4b-a336-425e-b827-5a144578e3d1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3cfb3d4b-a336-425e-b827-5a144578e3d1-part16', 'scsi-SQEMU_QEMU_HARDDISK_3cfb3d4b-a336-425e-b827-5a144578e3d1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908460 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908466 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.908472 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.908477 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.908506 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908556 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908570 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908577 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908598 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908605 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908611 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908657 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908674 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbe1802f-7171-48ed-9202-61a04dc54e1c', 'scsi-SQEMU_QEMU_HARDDISK_bbe1802f-7171-48ed-9202-61a04dc54e1c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbe1802f-7171-48ed-9202-61a04dc54e1c-part1', 'scsi-SQEMU_QEMU_HARDDISK_bbe1802f-7171-48ed-9202-61a04dc54e1c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbe1802f-7171-48ed-9202-61a04dc54e1c-part14', 'scsi-SQEMU_QEMU_HARDDISK_bbe1802f-7171-48ed-9202-61a04dc54e1c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbe1802f-7171-48ed-9202-61a04dc54e1c-part15', 'scsi-SQEMU_QEMU_HARDDISK_bbe1802f-7171-48ed-9202-61a04dc54e1c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbe1802f-7171-48ed-9202-61a04dc54e1c-part16', 'scsi-SQEMU_QEMU_HARDDISK_bbe1802f-7171-48ed-9202-61a04dc54e1c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908688 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-09 00:54:31.908695 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.908712 | orchestrator |
2026-04-09 00:54:31.908720 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-09 00:54:31.908728 | orchestrator | Thursday 09 April 2026 00:44:23 +0000 (0:00:02.416) 0:00:34.619 ********
2026-04-09 00:54:31.908734 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.908740 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.908747 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.908753 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.908758 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.908764 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.908771 | orchestrator |
2026-04-09 00:54:31.908777 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-09 00:54:31.908783 | orchestrator | Thursday 09 April 2026 00:44:25 +0000 (0:00:01.294) 0:00:35.913 ********
2026-04-09 00:54:31.908789 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.908796 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.908803 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.908809 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.908815 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.908821 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.908828 | orchestrator |
2026-04-09 00:54:31.908834 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 00:54:31.908840 | orchestrator | Thursday 09 April 2026 00:44:25 +0000 (0:00:00.737) 0:00:36.651 ********
2026-04-09 00:54:31.908913 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.908924 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.908930 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.908936 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.908954 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.908981 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.908988 | orchestrator |
2026-04-09 00:54:31.908995 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 00:54:31.909001 | orchestrator | Thursday 09 April 2026 00:44:26 +0000 (0:00:00.547) 0:00:37.199 ********
2026-04-09 00:54:31.909005 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.909009 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.909012 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.909016 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.909020 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.909024 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.909028 | orchestrator |
2026-04-09 00:54:31.909036 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-09 00:54:31.909042 | orchestrator | Thursday 09 April 2026 00:44:27 +0000 (0:00:01.351) 0:00:38.550 ********
2026-04-09 00:54:31.909049 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.909055 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.909061 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.909067 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.909074 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.909080 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.909087 | orchestrator |
2026-04-09 00:54:31.909093 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-09 00:54:31.909100 | orchestrator | Thursday 09 April 2026 00:44:28 +0000 (0:00:01.153) 0:00:39.703 ********
2026-04-09 00:54:31.909107 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.909113 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.909119 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.909126 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.909130 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.909134 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.909137 | orchestrator |
2026-04-09 00:54:31.909141 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-09 00:54:31.909145 | orchestrator | Thursday 09 April 2026 00:44:30 +0000 (0:00:01.651) 0:00:41.354 ********
2026-04-09 00:54:31.909149 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 00:54:31.909154 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 00:54:31.909158 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 00:54:31.909162 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 00:54:31.909166 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 00:54:31.909170 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 00:54:31.909174 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 00:54:31.909178 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:54:31.909182 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 00:54:31.909185 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 00:54:31.909189 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 00:54:31.909193 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:54:31.909197 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 00:54:31.909201 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 00:54:31.909205 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 00:54:31.909209 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:54:31.909213 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 00:54:31.909216 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 00:54:31.909220 | orchestrator |
2026-04-09 00:54:31.909224 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-09 00:54:31.909228 | orchestrator | Thursday 09 April 2026 00:44:33 +0000 (0:00:02.716) 0:00:44.071 ********
2026-04-09 00:54:31.909236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-09 00:54:31.909240 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-09 00:54:31.909244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-09 00:54:31.909248 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.909252 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-09 00:54:31.909256 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-09 00:54:31.909260 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-09 00:54:31.909264 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.909268 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-09 00:54:31.909271 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-09 00:54:31.909275 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-09 00:54:31.909279 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.909283 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:54:31.909287 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:54:31.909291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:54:31.909295 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-09 00:54:31.909299 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-09 00:54:31.909303 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-09 00:54:31.909307 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.909311 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.909315 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-09 00:54:31.909319 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-09 00:54:31.909322 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-09 00:54:31.909347 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.909352 | orchestrator |
2026-04-09 00:54:31.909356 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-09 00:54:31.909360 | orchestrator | Thursday 09 April 2026 00:44:34 +0000 (0:00:01.483) 0:00:45.555 ********
2026-04-09 00:54:31.909364 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.909368 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.909372 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.909377 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:54:31.909381 | orchestrator |
2026-04-09 00:54:31.909385 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-09 00:54:31.909392 | orchestrator | Thursday 09 April 2026 00:44:36 +0000 (0:00:02.043) 0:00:47.598 ********
2026-04-09 00:54:31.909397 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.909401 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.909404 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.909408 | orchestrator |
2026-04-09 00:54:31.909412 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-09 00:54:31.909416 | orchestrator | Thursday 09 April 2026 00:44:37 +0000 (0:00:00.337) 0:00:47.936 ********
2026-04-09 00:54:31.909420 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.909424 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.909428 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.909432 | orchestrator |
2026-04-09 00:54:31.909436 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-09 00:54:31.909440 | orchestrator | Thursday 09 April 2026 00:44:37 +0000 (0:00:00.335) 0:00:48.271 ********
2026-04-09 00:54:31.909444 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.909448 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.909452 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.909459 | orchestrator |
2026-04-09 00:54:31.909463 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-09 00:54:31.909467 | orchestrator | Thursday 09 April 2026 00:44:38 +0000 (0:00:00.716) 0:00:48.988 ********
2026-04-09 00:54:31.909471 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.909475 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.909479 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.909483 | orchestrator |
2026-04-09 00:54:31.909487 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-09 00:54:31.909491 | orchestrator | Thursday 09 April 2026 00:44:39 +0000 (0:00:01.537) 0:00:50.526 ********
2026-04-09 00:54:31.909495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:54:31.909499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:54:31.909503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:54:31.909507 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.909510 | orchestrator |
2026-04-09 00:54:31.909514 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-09 00:54:31.909519 | orchestrator | Thursday 09 April 2026 00:44:40 +0000 (0:00:00.440) 0:00:50.966 ********
2026-04-09 00:54:31.909522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:54:31.909527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:54:31.909531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:54:31.909534 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.909539 | orchestrator |
2026-04-09 00:54:31.909543 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-09 00:54:31.909548 | orchestrator | Thursday 09 April 2026 00:44:40 +0000 (0:00:00.425) 0:00:51.391 ********
2026-04-09 00:54:31.909553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:54:31.909557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:54:31.909562 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:54:31.909566 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.909571 | orchestrator |
2026-04-09 00:54:31.909575 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-09 00:54:31.909580 | orchestrator | Thursday 09 April 2026 00:44:41 +0000 (0:00:00.357) 0:00:51.749 ********
2026-04-09 00:54:31.909584 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.909589 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.909593 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.909598 | orchestrator |
2026-04-09 00:54:31.909602 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-09 00:54:31.909607 | orchestrator | Thursday 09 April 2026 00:44:41 +0000 (0:00:00.387) 0:00:52.137 ********
2026-04-09 00:54:31.909612 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-09 00:54:31.909616 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-09 00:54:31.909621 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-09 00:54:31.909626 | orchestrator |
2026-04-09 00:54:31.909630 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-09 00:54:31.909635 | orchestrator | Thursday 09 April 2026 00:44:42 +0000 (0:00:01.055) 0:00:53.192 ********
2026-04-09 00:54:31.909640 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-09 00:54:31.909644 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 00:54:31.909649 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 00:54:31.909653 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:54:31.909658 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-09 00:54:31.909663 |
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 00:54:31.909668 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 00:54:31.909675 | orchestrator | 2026-04-09 00:54:31.909694 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 00:54:31.909701 | orchestrator | Thursday 09 April 2026 00:44:43 +0000 (0:00:00.972) 0:00:54.164 ******** 2026-04-09 00:54:31.909708 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 00:54:31.909715 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:54:31.909721 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:54:31.909727 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-09 00:54:31.909733 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 00:54:31.909743 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 00:54:31.909750 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 00:54:31.909756 | orchestrator | 2026-04-09 00:54:31.909763 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:54:31.909770 | orchestrator | Thursday 09 April 2026 00:44:45 +0000 (0:00:02.520) 0:00:56.685 ******** 2026-04-09 00:54:31.909777 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.909785 | orchestrator | 2026-04-09 00:54:31.909792 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-04-09 00:54:31.909798 | orchestrator | Thursday 09 April 2026 00:44:47 +0000 (0:00:01.074) 0:00:57.759 ******** 2026-04-09 00:54:31.909803 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.909807 | orchestrator | 2026-04-09 00:54:31.909812 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:54:31.909817 | orchestrator | Thursday 09 April 2026 00:44:48 +0000 (0:00:01.105) 0:00:58.865 ******** 2026-04-09 00:54:31.909821 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.909826 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.909831 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.909835 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.909840 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.909844 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.909848 | orchestrator | 2026-04-09 00:54:31.909852 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:54:31.909856 | orchestrator | Thursday 09 April 2026 00:44:49 +0000 (0:00:00.968) 0:00:59.833 ******** 2026-04-09 00:54:31.909860 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.909864 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.909868 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.909871 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.909875 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.909879 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.909883 | orchestrator | 2026-04-09 00:54:31.909887 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 00:54:31.909891 | orchestrator | Thursday 09 April 2026 00:44:50 +0000 
(0:00:01.064) 0:01:00.897 ******** 2026-04-09 00:54:31.909895 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.909899 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.909903 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.909907 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.909911 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.909915 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.909918 | orchestrator | 2026-04-09 00:54:31.909922 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 00:54:31.909930 | orchestrator | Thursday 09 April 2026 00:44:50 +0000 (0:00:00.701) 0:01:01.599 ******** 2026-04-09 00:54:31.909934 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.909938 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.909942 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.909945 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.909949 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.909953 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.909984 | orchestrator | 2026-04-09 00:54:31.909990 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 00:54:31.909994 | orchestrator | Thursday 09 April 2026 00:44:51 +0000 (0:00:00.792) 0:01:02.392 ******** 2026-04-09 00:54:31.909998 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.910002 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.910006 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.910010 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.910035 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.910039 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.910043 | orchestrator | 2026-04-09 00:54:31.910047 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-04-09 00:54:31.910051 | orchestrator | Thursday 09 April 2026 00:44:52 +0000 (0:00:00.943) 0:01:03.336 ******** 2026-04-09 00:54:31.910055 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.910059 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.910063 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.910067 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.910071 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.910075 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.910079 | orchestrator | 2026-04-09 00:54:31.910083 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:54:31.910087 | orchestrator | Thursday 09 April 2026 00:44:53 +0000 (0:00:00.908) 0:01:04.244 ******** 2026-04-09 00:54:31.910091 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.910095 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.910098 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.910102 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.910106 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.910110 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.910114 | orchestrator | 2026-04-09 00:54:31.910136 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:54:31.910141 | orchestrator | Thursday 09 April 2026 00:44:54 +0000 (0:00:00.559) 0:01:04.804 ******** 2026-04-09 00:54:31.910145 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.910149 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.910153 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.910157 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.910161 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.910165 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.910169 | orchestrator | 2026-04-09 
00:54:31.910173 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:54:31.910177 | orchestrator | Thursday 09 April 2026 00:44:55 +0000 (0:00:01.287) 0:01:06.092 ******** 2026-04-09 00:54:31.910181 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.910184 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.910188 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.910192 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.910199 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.910203 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.910207 | orchestrator | 2026-04-09 00:54:31.910211 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:54:31.910215 | orchestrator | Thursday 09 April 2026 00:44:56 +0000 (0:00:01.298) 0:01:07.390 ******** 2026-04-09 00:54:31.910219 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.910223 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.910227 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.910234 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.910238 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.910242 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.910246 | orchestrator | 2026-04-09 00:54:31.910250 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:54:31.910254 | orchestrator | Thursday 09 April 2026 00:44:57 +0000 (0:00:01.192) 0:01:08.582 ******** 2026-04-09 00:54:31.910258 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.910262 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.910266 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.910270 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.910274 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.910278 | 
orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.910282 | orchestrator | 2026-04-09 00:54:31.910286 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:54:31.910290 | orchestrator | Thursday 09 April 2026 00:44:58 +0000 (0:00:00.605) 0:01:09.188 ******** 2026-04-09 00:54:31.910294 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.910297 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.910301 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.910305 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.910309 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.910313 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.910317 | orchestrator | 2026-04-09 00:54:31.910321 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 00:54:31.910325 | orchestrator | Thursday 09 April 2026 00:44:59 +0000 (0:00:00.587) 0:01:09.776 ******** 2026-04-09 00:54:31.910329 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.910333 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.910337 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.910341 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.910345 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.910349 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.910353 | orchestrator | 2026-04-09 00:54:31.910359 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:54:31.910365 | orchestrator | Thursday 09 April 2026 00:44:59 +0000 (0:00:00.643) 0:01:10.419 ******** 2026-04-09 00:54:31.910372 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.910378 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.910384 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.910388 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
00:54:31.910392 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.910396 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.910400 | orchestrator | 2026-04-09 00:54:31.910404 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 00:54:31.910408 | orchestrator | Thursday 09 April 2026 00:45:00 +0000 (0:00:00.693) 0:01:11.113 ******** 2026-04-09 00:54:31.910412 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.910416 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.910420 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.910423 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.910427 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.910431 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.910435 | orchestrator | 2026-04-09 00:54:31.910439 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:54:31.910443 | orchestrator | Thursday 09 April 2026 00:45:00 +0000 (0:00:00.546) 0:01:11.659 ******** 2026-04-09 00:54:31.910447 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.910451 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.910455 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.910459 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.910463 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.910467 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.910471 | orchestrator | 2026-04-09 00:54:31.910481 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 00:54:31.910485 | orchestrator | Thursday 09 April 2026 00:45:01 +0000 (0:00:00.746) 0:01:12.405 ******** 2026-04-09 00:54:31.910489 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.910493 | orchestrator | skipping: [testbed-node-4] 2026-04-09 
00:54:31.910497 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.910501 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.910505 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.910509 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.910513 | orchestrator | 2026-04-09 00:54:31.910517 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:54:31.910521 | orchestrator | Thursday 09 April 2026 00:45:02 +0000 (0:00:00.744) 0:01:13.149 ******** 2026-04-09 00:54:31.910525 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.910529 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.910533 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.910536 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.910540 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.910544 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.910548 | orchestrator | 2026-04-09 00:54:31.910566 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:54:31.910570 | orchestrator | Thursday 09 April 2026 00:45:03 +0000 (0:00:00.803) 0:01:13.953 ******** 2026-04-09 00:54:31.910575 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.910578 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.910582 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.910586 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.910590 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.910594 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.910598 | orchestrator | 2026-04-09 00:54:31.910602 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-09 00:54:31.910606 | orchestrator | Thursday 09 April 2026 00:45:04 +0000 (0:00:01.324) 0:01:15.278 ******** 2026-04-09 00:54:31.910610 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.910614 | 
orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.910618 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.910625 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.910629 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.910633 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.910636 | orchestrator | 2026-04-09 00:54:31.910640 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-09 00:54:31.910644 | orchestrator | Thursday 09 April 2026 00:45:06 +0000 (0:00:01.495) 0:01:16.773 ******** 2026-04-09 00:54:31.910648 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.910652 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.910656 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.910660 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.910664 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.910668 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.910672 | orchestrator | 2026-04-09 00:54:31.910676 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-09 00:54:31.910680 | orchestrator | Thursday 09 April 2026 00:45:09 +0000 (0:00:03.520) 0:01:20.293 ******** 2026-04-09 00:54:31.910684 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.910688 | orchestrator | 2026-04-09 00:54:31.910692 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-09 00:54:31.910696 | orchestrator | Thursday 09 April 2026 00:45:10 +0000 (0:00:01.228) 0:01:21.522 ******** 2026-04-09 00:54:31.910700 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.910704 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.910709 | orchestrator | 
skipping: [testbed-node-5] 2026-04-09 00:54:31.910716 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.910727 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.910734 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.910740 | orchestrator | 2026-04-09 00:54:31.910745 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-09 00:54:31.910752 | orchestrator | Thursday 09 April 2026 00:45:11 +0000 (0:00:00.842) 0:01:22.364 ******** 2026-04-09 00:54:31.910757 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.910763 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.910769 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.910776 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.910781 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.910787 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.910793 | orchestrator | 2026-04-09 00:54:31.910800 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-09 00:54:31.910806 | orchestrator | Thursday 09 April 2026 00:45:12 +0000 (0:00:00.640) 0:01:23.005 ******** 2026-04-09 00:54:31.910812 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 00:54:31.910819 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 00:54:31.910825 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 00:54:31.910832 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 00:54:31.910839 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 00:54:31.910846 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 00:54:31.910851 | orchestrator | ok: 
[testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 00:54:31.910857 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-09 00:54:31.910863 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 00:54:31.910869 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 00:54:31.910876 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 00:54:31.910883 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-09 00:54:31.910890 | orchestrator | 2026-04-09 00:54:31.910897 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-09 00:54:31.910904 | orchestrator | Thursday 09 April 2026 00:45:13 +0000 (0:00:01.547) 0:01:24.552 ******** 2026-04-09 00:54:31.910909 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.910915 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.910921 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.910927 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.910934 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.910940 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.910947 | orchestrator | 2026-04-09 00:54:31.910954 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-09 00:54:31.910974 | orchestrator | Thursday 09 April 2026 00:45:14 +0000 (0:00:00.960) 0:01:25.513 ******** 2026-04-09 00:54:31.910981 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.910987 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.911021 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.911030 | orchestrator | skipping: [testbed-node-0] 2026-04-09 
00:54:31.911036 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.911042 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.911048 | orchestrator | 2026-04-09 00:54:31.911055 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-09 00:54:31.911062 | orchestrator | Thursday 09 April 2026 00:45:15 +0000 (0:00:00.772) 0:01:26.286 ******** 2026-04-09 00:54:31.911069 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.911076 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.911090 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.911098 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.911118 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.911125 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.911132 | orchestrator | 2026-04-09 00:54:31.911139 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-09 00:54:31.911150 | orchestrator | Thursday 09 April 2026 00:45:15 +0000 (0:00:00.432) 0:01:26.719 ******** 2026-04-09 00:54:31.911156 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.911163 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.911169 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.911175 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.911181 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.911187 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.911193 | orchestrator | 2026-04-09 00:54:31.911200 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-09 00:54:31.911215 | orchestrator | Thursday 09 April 2026 00:45:16 +0000 (0:00:00.597) 0:01:27.317 ******** 2026-04-09 00:54:31.911223 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.911231 | orchestrator | 2026-04-09 00:54:31.911237 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-09 00:54:31.911243 | orchestrator | Thursday 09 April 2026 00:45:17 +0000 (0:00:00.912) 0:01:28.229 ******** 2026-04-09 00:54:31.911249 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.911255 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.911261 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.911267 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.911274 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.911280 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.911286 | orchestrator | 2026-04-09 00:54:31.911292 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-09 00:54:31.911299 | orchestrator | Thursday 09 April 2026 00:46:13 +0000 (0:00:56.336) 0:02:24.565 ******** 2026-04-09 00:54:31.911306 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 00:54:31.911311 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 00:54:31.911315 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 00:54:31.911319 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.911323 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-09 00:54:31.911327 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-09 00:54:31.911331 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-09 00:54:31.911335 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.911339 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2026-04-09 00:54:31.911343 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 00:54:31.911347 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 00:54:31.911351 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.911355 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 00:54:31.911359 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 00:54:31.911362 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 00:54:31.911366 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.911370 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 00:54:31.911374 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 00:54:31.911385 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 00:54:31.911389 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.911393 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-09 00:54:31.911397 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-09 00:54:31.911400 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-09 00:54:31.911404 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.911408 | orchestrator |
2026-04-09 00:54:31.911412 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-09 00:54:31.911416 | orchestrator | Thursday 09 April 2026 00:46:14 +0000 (0:00:00.746) 0:02:25.312 ********
2026-04-09 00:54:31.911420 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.911424 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.911428 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.911432 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.911436 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.911440 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.911444 | orchestrator |
2026-04-09 00:54:31.911448 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-09 00:54:31.911451 | orchestrator | Thursday 09 April 2026 00:46:15 +0000 (0:00:00.527) 0:02:25.839 ********
2026-04-09 00:54:31.911455 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.911459 | orchestrator |
2026-04-09 00:54:31.911490 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-09 00:54:31.911495 | orchestrator | Thursday 09 April 2026 00:46:15 +0000 (0:00:00.120) 0:02:25.959 ********
2026-04-09 00:54:31.911499 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.911504 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.911508 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.911512 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.911515 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.911519 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.911523 | orchestrator |
2026-04-09 00:54:31.911527 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-09 00:54:31.911531 | orchestrator | Thursday 09 April 2026 00:46:15 +0000 (0:00:00.687) 0:02:26.647 ********
2026-04-09 00:54:31.911535 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.911543 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.911547 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.911551 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.911555 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.911561 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.911567 | orchestrator |
2026-04-09 00:54:31.911573 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-09 00:54:31.911580 | orchestrator | Thursday 09 April 2026 00:46:16 +0000 (0:00:00.519) 0:02:27.166 ********
2026-04-09 00:54:31.911587 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.911594 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.911600 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.911607 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.911614 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.911618 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.911622 | orchestrator |
2026-04-09 00:54:31.911626 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-09 00:54:31.911630 | orchestrator | Thursday 09 April 2026 00:46:17 +0000 (0:00:00.695) 0:02:27.861 ********
2026-04-09 00:54:31.911634 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.911638 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.911642 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.911646 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.911649 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.911653 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.911662 | orchestrator |
2026-04-09 00:54:31.911666 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-09 00:54:31.911670 | orchestrator | Thursday 09 April 2026 00:46:20 +0000 (0:00:02.931) 0:02:30.792 ********
2026-04-09 00:54:31.911673 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.911677 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.911682 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.911685 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.911689 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.911693 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.911697 | orchestrator |
2026-04-09 00:54:31.911701 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-09 00:54:31.911705 | orchestrator | Thursday 09 April 2026 00:46:20 +0000 (0:00:00.643) 0:02:31.435 ********
2026-04-09 00:54:31.911709 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.911714 | orchestrator |
2026-04-09 00:54:31.911718 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-09 00:54:31.911722 | orchestrator | Thursday 09 April 2026 00:46:21 +0000 (0:00:01.082) 0:02:32.518 ********
2026-04-09 00:54:31.911726 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.911730 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.911734 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.911738 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.911742 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.911745 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.911749 | orchestrator |
2026-04-09 00:54:31.911753 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-09 00:54:31.911757 | orchestrator | Thursday 09 April 2026 00:46:22 +0000 (0:00:00.568) 0:02:33.087 ********
2026-04-09 00:54:31.911761 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.911765 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.911769 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.911773 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.911777 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.911781 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.911784 | orchestrator |
2026-04-09 00:54:31.911788 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-09 00:54:31.911792 | orchestrator | Thursday 09 April 2026 00:46:23 +0000 (0:00:00.679) 0:02:33.766 ********
2026-04-09 00:54:31.911796 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.911800 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.911804 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.911808 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.911812 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.911816 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.911819 | orchestrator |
2026-04-09 00:54:31.911823 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-09 00:54:31.911827 | orchestrator | Thursday 09 April 2026 00:46:23 +0000 (0:00:00.667) 0:02:34.364 ********
2026-04-09 00:54:31.911831 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.911835 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.911839 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.911843 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.911847 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.911850 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.911854 | orchestrator |
2026-04-09 00:54:31.911858 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-09 00:54:31.911862 | orchestrator | Thursday 09 April 2026 00:46:24 +0000 (0:00:00.609) 0:02:35.031 ********
2026-04-09 00:54:31.911866 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.911870 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.911877 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.911896 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.911900 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.911904 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.911908 | orchestrator |
2026-04-09 00:54:31.911912 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-09 00:54:31.911916 | orchestrator | Thursday 09 April 2026 00:46:24 +0000 (0:00:00.787) 0:02:35.641 ********
2026-04-09 00:54:31.911920 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.911924 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.911928 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.911931 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.911935 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.911939 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.911943 | orchestrator |
2026-04-09 00:54:31.911947 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-09 00:54:31.911953 | orchestrator | Thursday 09 April 2026 00:46:25 +0000 (0:00:00.539) 0:02:36.428 ********
2026-04-09 00:54:31.911997 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.912003 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.912007 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.912011 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.912015 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.912019 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.912023 | orchestrator |
2026-04-09 00:54:31.912027 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-09 00:54:31.912031 | orchestrator | Thursday 09 April 2026 00:46:26 +0000 (0:00:00.539) 0:02:36.968 ********
2026-04-09 00:54:31.912035 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.912039 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.912042 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.912046 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.912050 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.912054 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.912058 | orchestrator |
2026-04-09 00:54:31.912062 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-09 00:54:31.912066 | orchestrator | Thursday 09 April 2026 00:46:26 +0000 (0:00:00.623) 0:02:37.592 ********
2026-04-09 00:54:31.912070 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.912074 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.912078 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.912082 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.912085 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.912089 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.912093 | orchestrator |
2026-04-09 00:54:31.912097 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-09 00:54:31.912101 | orchestrator | Thursday 09 April 2026 00:46:27 +0000 (0:00:00.956) 0:02:38.549 ********
2026-04-09 00:54:31.912105 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.912110 | orchestrator |
2026-04-09 00:54:31.912114 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-09 00:54:31.912118 | orchestrator | Thursday 09 April 2026 00:46:28 +0000 (0:00:01.069) 0:02:39.618 ********
2026-04-09 00:54:31.912122 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-04-09 00:54:31.912126 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-09 00:54:31.912130 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-04-09 00:54:31.912134 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-04-09 00:54:31.912138 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-04-09 00:54:31.912142 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-09 00:54:31.912149 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-09 00:54:31.912153 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-04-09 00:54:31.912157 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-09 00:54:31.912161 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-09 00:54:31.912165 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-04-09 00:54:31.912169 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-09 00:54:31.912173 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-09 00:54:31.912177 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-09 00:54:31.912181 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-09 00:54:31.912185 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-09 00:54:31.912189 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-09 00:54:31.912193 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-09 00:54:31.912197 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-09 00:54:31.912201 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-09 00:54:31.912205 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-09 00:54:31.912208 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-09 00:54:31.912212 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-09 00:54:31.912216 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-09 00:54:31.912220 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-09 00:54:31.912224 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-09 00:54:31.912228 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-09 00:54:31.912232 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-09 00:54:31.912236 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-09 00:54:31.912240 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-09 00:54:31.912246 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:54:31.912271 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-09 00:54:31.912279 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-09 00:54:31.912286 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:54:31.912292 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-09 00:54:31.912298 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-09 00:54:31.912304 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-09 00:54:31.912311 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-09 00:54:31.912318 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-09 00:54:31.912324 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:54:31.912334 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-09 00:54:31.912341 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-09 00:54:31.912346 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-09 00:54:31.912350 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:54:31.912354 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:54:31.912358 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-09 00:54:31.912362 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:54:31.912366 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-09 00:54:31.912369 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:54:31.912373 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:54:31.912381 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:54:31.912385 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-09 00:54:31.912389 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:54:31.912393 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:54:31.912397 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:54:31.912400 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:54:31.912404 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:54:31.912408 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:54:31.912412 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-09 00:54:31.912416 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:54:31.912420 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:54:31.912424 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:54:31.912428 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-09 00:54:31.912432 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:54:31.912436 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-09 00:54:31.912440 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:54:31.912444 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:54:31.912448 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:54:31.912451 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-09 00:54:31.912455 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:54:31.912459 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-09 00:54:31.912463 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:54:31.912467 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:54:31.912471 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:54:31.912475 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:54:31.912479 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-09 00:54:31.912483 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:54:31.912487 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:54:31.912491 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:54:31.912495 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:54:31.912499 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-09 00:54:31.912502 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:54:31.912506 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:54:31.912510 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:54:31.912514 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:54:31.912518 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-09 00:54:31.912522 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-09 00:54:31.912526 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-09 00:54:31.912545 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-09 00:54:31.912552 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-09 00:54:31.912557 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-09 00:54:31.912560 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-09 00:54:31.912565 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-09 00:54:31.912568 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-09 00:54:31.912572 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-09 00:54:31.912576 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-09 00:54:31.912580 | orchestrator |
2026-04-09 00:54:31.912584 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-09 00:54:31.912590 | orchestrator | Thursday 09 April 2026 00:46:35 +0000 (0:00:06.365) 0:02:45.984 ********
2026-04-09 00:54:31.912594 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.912598 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.912602 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.912607 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:54:31.912611 | orchestrator |
2026-04-09 00:54:31.912615 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-09 00:54:31.912619 | orchestrator | Thursday 09 April 2026 00:46:36 +0000 (0:00:00.928) 0:02:46.912 ********
2026-04-09 00:54:31.912623 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 00:54:31.912627 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-09 00:54:31.912631 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 00:54:31.912635 | orchestrator |
2026-04-09 00:54:31.912639 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-09 00:54:31.912643 | orchestrator | Thursday 09 April 2026 00:46:36 +0000 (0:00:00.714) 0:02:47.627 ********
2026-04-09 00:54:31.912647 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 00:54:31.912651 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-09 00:54:31.912655 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 00:54:31.912658 | orchestrator |
2026-04-09 00:54:31.912662 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-09 00:54:31.912666 | orchestrator | Thursday 09 April 2026 00:46:38 +0000 (0:00:01.198) 0:02:48.826 ********
2026-04-09 00:54:31.912670 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.912674 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.912678 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.912682 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.912686 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.912690 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.912694 | orchestrator |
2026-04-09 00:54:31.912698 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-09 00:54:31.912702 | orchestrator | Thursday 09 April 2026 00:46:38 +0000 (0:00:00.658) 0:02:49.484 ********
2026-04-09 00:54:31.912706 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.912710 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.912714 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.912718 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.912722 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.912726 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.912730 | orchestrator |
2026-04-09 00:54:31.912734 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-09 00:54:31.912772 | orchestrator | Thursday 09 April 2026 00:46:39 +0000 (0:00:00.690) 0:02:50.066 ********
2026-04-09 00:54:31.912776 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.912780 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.912784 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.912788 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.912792 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.912796 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.912800 | orchestrator |
2026-04-09 00:54:31.912804 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-09 00:54:31.912808 | orchestrator | Thursday 09 April 2026 00:46:40 +0000 (0:00:00.690) 0:02:50.757 ********
2026-04-09 00:54:31.912812 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.912816 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.912820 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.912823 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.912827 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.912831 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.912835 | orchestrator |
2026-04-09 00:54:31.912839 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-09 00:54:31.912843 | orchestrator | Thursday 09 April 2026 00:46:40 +0000 (0:00:00.592) 0:02:51.349 ********
2026-04-09 00:54:31.912847 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.912851 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.912855 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.912859 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.912863 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.912867 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.912870 | orchestrator |
2026-04-09 00:54:31.912876 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-09 00:54:31.912899 | orchestrator | Thursday 09 April 2026 00:46:41 +0000 (0:00:00.668) 0:02:52.017 ********
2026-04-09 00:54:31.912907 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.912913 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.912919 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.912926 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.912931 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.912938 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.912944 | orchestrator |
2026-04-09 00:54:31.912950 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-09 00:54:31.912956 | orchestrator | Thursday 09 April 2026 00:46:41 +0000 (0:00:00.531) 0:02:52.549 ********
2026-04-09 00:54:31.912976 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.912982 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.912988 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.912997 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.913003 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.913010 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.913016 | orchestrator |
2026-04-09 00:54:31.913023 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-09 00:54:31.913030 | orchestrator | Thursday 09 April 2026 00:46:42 +0000 (0:00:00.731) 0:02:53.280 ********
2026-04-09 00:54:31.913036 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.913043 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.913049 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.913055 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.913062 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.913068 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.913074 | orchestrator |
2026-04-09 00:54:31.913080 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-09 00:54:31.913087 | orchestrator | Thursday 09 April 2026 00:46:43 +0000 (0:00:00.608) 0:02:53.889 ********
2026-04-09 00:54:31.913100 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.913107 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.913113 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.913119 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.913126 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.913133 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.913139 | orchestrator |
2026-04-09 00:54:31.913146 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-09 00:54:31.913153 | orchestrator | Thursday 09 April 2026 00:46:45 +0000 (0:00:02.416) 0:02:56.306 ********
2026-04-09 00:54:31.913159 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.913165 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.913172 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.913179 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.913185 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.913191 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.913196 | orchestrator |
2026-04-09 00:54:31.913202 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-09 00:54:31.913209 | orchestrator | Thursday 09 April 2026 00:46:46 +0000 (0:00:00.828) 0:02:57.134 ********
2026-04-09 00:54:31.913215 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.913222 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.913229 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.913236 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.913240 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.913244 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.913248 | orchestrator |
2026-04-09 00:54:31.913252 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-09 00:54:31.913256 | orchestrator | Thursday 09 April 2026 00:46:47 +0000 (0:00:00.636) 0:02:57.770 ********
2026-04-09 00:54:31.913260 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.913264 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.913267 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.913271 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.913275 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.913279 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.913283 | orchestrator |
2026-04-09 00:54:31.913287 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-09 00:54:31.913291 | orchestrator | Thursday 09 April 2026 00:46:47 +0000 (0:00:00.856) 0:02:58.627 ********
2026-04-09 00:54:31.913295 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-09 00:54:31.913299 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-09 00:54:31.913303 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-09 00:54:31.913307 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.913311 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.913314 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.913318 | orchestrator |
2026-04-09 00:54:31.913322 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-09 00:54:31.913326 | orchestrator | Thursday 09 April 2026 00:46:48 +0000 (0:00:00.679) 0:02:59.307 ********
2026-04-09 00:54:31.913331 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-09 00:54:31.913360 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-09 00:54:31.913370 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.913374 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-09 00:54:31.913381 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-09 00:54:31.913386 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.913390 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-09 00:54:31.913394 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
 2026-04-09 00:54:31.913398 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.913402 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.913406 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.913410 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.913414 | orchestrator | 2026-04-09 00:54:31.913418 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-09 00:54:31.913422 | orchestrator | Thursday 09 April 2026 00:46:49 +0000 (0:00:01.084) 0:03:00.392 ******** 2026-04-09 00:54:31.913426 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.913430 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.913434 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.913438 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.913442 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.913446 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.913450 | orchestrator | 2026-04-09 00:54:31.913454 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-09 00:54:31.913458 | orchestrator | Thursday 09 April 2026 00:46:50 +0000 (0:00:00.582) 0:03:00.975 ******** 2026-04-09 00:54:31.913462 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.913466 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.913470 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.913474 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.913478 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.913484 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.913490 | orchestrator | 2026-04-09 00:54:31.913496 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 00:54:31.913503 | orchestrator | Thursday 09 April 
2026 00:46:50 +0000 (0:00:00.667) 0:03:01.642 ******** 2026-04-09 00:54:31.913509 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.913514 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.913521 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.913527 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.913532 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.913538 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.913545 | orchestrator | 2026-04-09 00:54:31.913551 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-09 00:54:31.913562 | orchestrator | Thursday 09 April 2026 00:46:51 +0000 (0:00:00.570) 0:03:02.213 ******** 2026-04-09 00:54:31.913569 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.913576 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.913583 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.913589 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.913596 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.913601 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.913605 | orchestrator | 2026-04-09 00:54:31.913609 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 00:54:31.913613 | orchestrator | Thursday 09 April 2026 00:46:52 +0000 (0:00:00.728) 0:03:02.941 ******** 2026-04-09 00:54:31.913617 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.913621 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.913625 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.913629 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.913633 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.913639 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.913645 | orchestrator | 2026-04-09 00:54:31.913651 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 00:54:31.913657 | orchestrator | Thursday 09 April 2026 00:46:52 +0000 (0:00:00.479) 0:03:03.421 ******** 2026-04-09 00:54:31.913663 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.913669 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.913675 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.913681 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.913687 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.913694 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.913700 | orchestrator | 2026-04-09 00:54:31.913707 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 00:54:31.913715 | orchestrator | Thursday 09 April 2026 00:46:53 +0000 (0:00:00.653) 0:03:04.075 ******** 2026-04-09 00:54:31.913745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:54:31.913753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:54:31.913759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:54:31.913765 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.913772 | orchestrator | 2026-04-09 00:54:31.913779 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 00:54:31.913785 | orchestrator | Thursday 09 April 2026 00:46:53 +0000 (0:00:00.359) 0:03:04.435 ******** 2026-04-09 00:54:31.913791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:54:31.913798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:54:31.913804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:54:31.913811 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.913817 | orchestrator | 2026-04-09 00:54:31.913827 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 00:54:31.913834 | orchestrator | Thursday 09 April 2026 00:46:53 +0000 (0:00:00.303) 0:03:04.738 ******** 2026-04-09 00:54:31.913839 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:54:31.913845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:54:31.913852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:54:31.913859 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.913866 | orchestrator | 2026-04-09 00:54:31.913872 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 00:54:31.913879 | orchestrator | Thursday 09 April 2026 00:46:54 +0000 (0:00:00.433) 0:03:05.172 ******** 2026-04-09 00:54:31.913884 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.913888 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.913891 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.913895 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.913904 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.913908 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.913913 | orchestrator | 2026-04-09 00:54:31.913919 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 00:54:31.913926 | orchestrator | Thursday 09 April 2026 00:46:55 +0000 (0:00:00.796) 0:03:05.968 ******** 2026-04-09 00:54:31.913932 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-09 00:54:31.913940 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-09 00:54:31.913946 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-09 00:54:31.913953 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-09 00:54:31.913995 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-09 00:54:31.914004 | orchestrator | skipping: 
[testbed-node-0] 2026-04-09 00:54:31.914011 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.914039 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-09 00:54:31.914043 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.914047 | orchestrator | 2026-04-09 00:54:31.914051 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-09 00:54:31.914055 | orchestrator | Thursday 09 April 2026 00:46:56 +0000 (0:00:01.714) 0:03:07.682 ******** 2026-04-09 00:54:31.914059 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.914063 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.914066 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.914071 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.914077 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.914084 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.914090 | orchestrator | 2026-04-09 00:54:31.914097 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 00:54:31.914104 | orchestrator | Thursday 09 April 2026 00:46:59 +0000 (0:00:02.827) 0:03:10.510 ******** 2026-04-09 00:54:31.914110 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.914116 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.914122 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.914127 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.914132 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.914139 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.914145 | orchestrator | 2026-04-09 00:54:31.914151 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-09 00:54:31.914157 | orchestrator | Thursday 09 April 2026 00:47:01 +0000 (0:00:01.282) 0:03:11.793 ******** 2026-04-09 00:54:31.914164 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 00:54:31.914171 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.914177 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.914184 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.914189 | orchestrator | 2026-04-09 00:54:31.914194 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-09 00:54:31.914198 | orchestrator | Thursday 09 April 2026 00:47:01 +0000 (0:00:00.842) 0:03:12.636 ******** 2026-04-09 00:54:31.914202 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.914206 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.914210 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.914214 | orchestrator | 2026-04-09 00:54:31.914218 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-09 00:54:31.914222 | orchestrator | Thursday 09 April 2026 00:47:02 +0000 (0:00:00.291) 0:03:12.927 ******** 2026-04-09 00:54:31.914226 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.914230 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.914233 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.914237 | orchestrator | 2026-04-09 00:54:31.914241 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-09 00:54:31.914245 | orchestrator | Thursday 09 April 2026 00:47:03 +0000 (0:00:01.159) 0:03:14.086 ******** 2026-04-09 00:54:31.914249 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 00:54:31.914258 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 00:54:31.914262 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 00:54:31.914266 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.914270 | orchestrator | 2026-04-09 
00:54:31.914300 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-09 00:54:31.914308 | orchestrator | Thursday 09 April 2026 00:47:03 +0000 (0:00:00.534) 0:03:14.620 ******** 2026-04-09 00:54:31.914315 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.914322 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.914328 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.914335 | orchestrator | 2026-04-09 00:54:31.914341 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-09 00:54:31.914347 | orchestrator | Thursday 09 April 2026 00:47:04 +0000 (0:00:00.290) 0:03:14.911 ******** 2026-04-09 00:54:31.914353 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.914359 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.914365 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.914371 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.914375 | orchestrator | 2026-04-09 00:54:31.914379 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-09 00:54:31.914383 | orchestrator | Thursday 09 April 2026 00:47:05 +0000 (0:00:00.887) 0:03:15.798 ******** 2026-04-09 00:54:31.914387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:54:31.914391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:54:31.914395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:54:31.914399 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914403 | orchestrator | 2026-04-09 00:54:31.914407 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-09 00:54:31.914411 | orchestrator | Thursday 09 April 2026 00:47:05 +0000 (0:00:00.365) 
0:03:16.164 ******** 2026-04-09 00:54:31.914415 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914419 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.914423 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.914427 | orchestrator | 2026-04-09 00:54:31.914433 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-09 00:54:31.914439 | orchestrator | Thursday 09 April 2026 00:47:05 +0000 (0:00:00.358) 0:03:16.523 ******** 2026-04-09 00:54:31.914445 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914452 | orchestrator | 2026-04-09 00:54:31.914459 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-09 00:54:31.914465 | orchestrator | Thursday 09 April 2026 00:47:06 +0000 (0:00:00.233) 0:03:16.756 ******** 2026-04-09 00:54:31.914471 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914478 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.914483 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.914489 | orchestrator | 2026-04-09 00:54:31.914495 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-09 00:54:31.914502 | orchestrator | Thursday 09 April 2026 00:47:06 +0000 (0:00:00.542) 0:03:17.299 ******** 2026-04-09 00:54:31.914508 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914514 | orchestrator | 2026-04-09 00:54:31.914520 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-09 00:54:31.914526 | orchestrator | Thursday 09 April 2026 00:47:06 +0000 (0:00:00.346) 0:03:17.646 ******** 2026-04-09 00:54:31.914532 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914538 | orchestrator | 2026-04-09 00:54:31.914544 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-09 00:54:31.914551 
| orchestrator | Thursday 09 April 2026 00:47:07 +0000 (0:00:01.020) 0:03:18.667 ******** 2026-04-09 00:54:31.914557 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914570 | orchestrator | 2026-04-09 00:54:31.914575 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-09 00:54:31.914578 | orchestrator | Thursday 09 April 2026 00:47:08 +0000 (0:00:00.190) 0:03:18.857 ******** 2026-04-09 00:54:31.914582 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914586 | orchestrator | 2026-04-09 00:54:31.914590 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-09 00:54:31.914594 | orchestrator | Thursday 09 April 2026 00:47:08 +0000 (0:00:00.298) 0:03:19.156 ******** 2026-04-09 00:54:31.914599 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914606 | orchestrator | 2026-04-09 00:54:31.914612 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-09 00:54:31.914618 | orchestrator | Thursday 09 April 2026 00:47:08 +0000 (0:00:00.212) 0:03:19.368 ******** 2026-04-09 00:54:31.914625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:54:31.914631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:54:31.914637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:54:31.914643 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914649 | orchestrator | 2026-04-09 00:54:31.914654 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-09 00:54:31.914660 | orchestrator | Thursday 09 April 2026 00:47:09 +0000 (0:00:00.438) 0:03:19.806 ******** 2026-04-09 00:54:31.914666 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914672 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.914678 | orchestrator | 
skipping: [testbed-node-5] 2026-04-09 00:54:31.914685 | orchestrator | 2026-04-09 00:54:31.914691 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-09 00:54:31.914697 | orchestrator | Thursday 09 April 2026 00:47:09 +0000 (0:00:00.331) 0:03:20.137 ******** 2026-04-09 00:54:31.914704 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914710 | orchestrator | 2026-04-09 00:54:31.914717 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-09 00:54:31.914723 | orchestrator | Thursday 09 April 2026 00:47:09 +0000 (0:00:00.267) 0:03:20.405 ******** 2026-04-09 00:54:31.914730 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914736 | orchestrator | 2026-04-09 00:54:31.914763 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-09 00:54:31.914770 | orchestrator | Thursday 09 April 2026 00:47:09 +0000 (0:00:00.240) 0:03:20.645 ******** 2026-04-09 00:54:31.914777 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.914784 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.914816 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.914823 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.914830 | orchestrator | 2026-04-09 00:54:31.914837 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-09 00:54:31.914844 | orchestrator | Thursday 09 April 2026 00:47:11 +0000 (0:00:01.276) 0:03:21.922 ******** 2026-04-09 00:54:31.914850 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.914857 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.914863 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.914869 | orchestrator | 2026-04-09 00:54:31.914876 | orchestrator | RUNNING HANDLER [ceph-handler : 
Copy mds restart script] *********************** 2026-04-09 00:54:31.914882 | orchestrator | Thursday 09 April 2026 00:47:11 +0000 (0:00:00.378) 0:03:22.300 ******** 2026-04-09 00:54:31.914889 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.914898 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.914905 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.914912 | orchestrator | 2026-04-09 00:54:31.914919 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-09 00:54:31.914925 | orchestrator | Thursday 09 April 2026 00:47:13 +0000 (0:00:01.444) 0:03:23.745 ******** 2026-04-09 00:54:31.914939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:54:31.914945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:54:31.914952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:54:31.914969 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.914976 | orchestrator | 2026-04-09 00:54:31.914983 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-09 00:54:31.914989 | orchestrator | Thursday 09 April 2026 00:47:13 +0000 (0:00:00.694) 0:03:24.439 ******** 2026-04-09 00:54:31.914995 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.915002 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.915009 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.915015 | orchestrator | 2026-04-09 00:54:31.915021 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-09 00:54:31.915028 | orchestrator | Thursday 09 April 2026 00:47:14 +0000 (0:00:00.340) 0:03:24.779 ******** 2026-04-09 00:54:31.915034 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.915040 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.915046 | orchestrator | skipping: 
[testbed-node-2] 2026-04-09 00:54:31.915053 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.915059 | orchestrator | 2026-04-09 00:54:31.915065 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-09 00:54:31.915072 | orchestrator | Thursday 09 April 2026 00:47:15 +0000 (0:00:01.077) 0:03:25.856 ******** 2026-04-09 00:54:31.915078 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.915085 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.915092 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.915098 | orchestrator | 2026-04-09 00:54:31.915104 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-09 00:54:31.915111 | orchestrator | Thursday 09 April 2026 00:47:15 +0000 (0:00:00.366) 0:03:26.223 ******** 2026-04-09 00:54:31.915117 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.915123 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.915130 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.915136 | orchestrator | 2026-04-09 00:54:31.915142 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-09 00:54:31.915148 | orchestrator | Thursday 09 April 2026 00:47:16 +0000 (0:00:01.316) 0:03:27.539 ******** 2026-04-09 00:54:31.915155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:54:31.915162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:54:31.915169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:54:31.915175 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.915181 | orchestrator | 2026-04-09 00:54:31.915188 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-09 00:54:31.915194 | 
orchestrator | Thursday 09 April 2026 00:47:17 +0000 (0:00:01.057) 0:03:28.596 ******** 2026-04-09 00:54:31.915200 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.915207 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.915213 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.915220 | orchestrator | 2026-04-09 00:54:31.915226 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-09 00:54:31.915232 | orchestrator | Thursday 09 April 2026 00:47:18 +0000 (0:00:00.442) 0:03:29.039 ******** 2026-04-09 00:54:31.915239 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.915245 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.915252 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.915258 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.915265 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.915271 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.915277 | orchestrator | 2026-04-09 00:54:31.915284 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-09 00:54:31.915290 | orchestrator | Thursday 09 April 2026 00:47:19 +0000 (0:00:01.077) 0:03:30.117 ******** 2026-04-09 00:54:31.915301 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.915307 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.915314 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.915320 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.915327 | orchestrator | 2026-04-09 00:54:31.915334 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-09 00:54:31.915340 | orchestrator | Thursday 09 April 2026 00:47:20 +0000 (0:00:01.087) 0:03:31.205 ******** 2026-04-09 00:54:31.915346 | orchestrator | ok: 
[testbed-node-0] 2026-04-09 00:54:31.915353 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.915359 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.915366 | orchestrator | 2026-04-09 00:54:31.915372 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-09 00:54:31.915399 | orchestrator | Thursday 09 April 2026 00:47:20 +0000 (0:00:00.352) 0:03:31.557 ******** 2026-04-09 00:54:31.915407 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.915414 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.915420 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.915427 | orchestrator | 2026-04-09 00:54:31.915433 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-09 00:54:31.915439 | orchestrator | Thursday 09 April 2026 00:47:22 +0000 (0:00:01.243) 0:03:32.801 ******** 2026-04-09 00:54:31.915446 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 00:54:31.915452 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 00:54:31.915458 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 00:54:31.915465 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.915471 | orchestrator | 2026-04-09 00:54:31.915477 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-09 00:54:31.915487 | orchestrator | Thursday 09 April 2026 00:47:23 +0000 (0:00:00.973) 0:03:33.775 ******** 2026-04-09 00:54:31.915494 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.915500 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.915507 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.915513 | orchestrator | 2026-04-09 00:54:31.915520 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-04-09 00:54:31.915524 | orchestrator | 2026-04-09 
00:54:31.915528 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:54:31.915532 | orchestrator | Thursday 09 April 2026 00:47:23 +0000 (0:00:00.867) 0:03:34.642 ******** 2026-04-09 00:54:31.915537 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.915541 | orchestrator | 2026-04-09 00:54:31.915545 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 00:54:31.915549 | orchestrator | Thursday 09 April 2026 00:47:24 +0000 (0:00:00.592) 0:03:35.235 ******** 2026-04-09 00:54:31.915553 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.915557 | orchestrator | 2026-04-09 00:54:31.915561 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:54:31.915565 | orchestrator | Thursday 09 April 2026 00:47:25 +0000 (0:00:00.842) 0:03:36.078 ******** 2026-04-09 00:54:31.915569 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.915573 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.915577 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.915581 | orchestrator | 2026-04-09 00:54:31.915585 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:54:31.915589 | orchestrator | Thursday 09 April 2026 00:47:26 +0000 (0:00:00.711) 0:03:36.789 ******** 2026-04-09 00:54:31.915593 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.915597 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.915607 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.915611 | orchestrator | 2026-04-09 00:54:31.915615 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-04-09 00:54:31.915620 | orchestrator | Thursday 09 April 2026 00:47:26 +0000 (0:00:00.327) 0:03:37.117 ********
2026-04-09 00:54:31.915626 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.915633 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.915639 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.915645 | orchestrator |
2026-04-09 00:54:31.915651 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 00:54:31.915657 | orchestrator | Thursday 09 April 2026 00:47:26 +0000 (0:00:00.363) 0:03:37.480 ********
2026-04-09 00:54:31.915663 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.915670 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.915676 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.915683 | orchestrator |
2026-04-09 00:54:31.915690 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 00:54:31.915696 | orchestrator | Thursday 09 April 2026 00:47:27 +0000 (0:00:00.343) 0:03:37.824 ********
2026-04-09 00:54:31.915703 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.915709 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.915716 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.915722 | orchestrator |
2026-04-09 00:54:31.915729 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 00:54:31.915735 | orchestrator | Thursday 09 April 2026 00:47:28 +0000 (0:00:01.009) 0:03:38.834 ********
2026-04-09 00:54:31.915741 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.915747 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.915753 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.915759 | orchestrator |
2026-04-09 00:54:31.915766 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 00:54:31.915773 | orchestrator | Thursday 09 April 2026 00:47:28 +0000 (0:00:00.380) 0:03:39.215 ********
2026-04-09 00:54:31.915780 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.915786 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.915792 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.915796 | orchestrator |
2026-04-09 00:54:31.915799 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 00:54:31.915804 | orchestrator | Thursday 09 April 2026 00:47:28 +0000 (0:00:00.325) 0:03:39.541 ********
2026-04-09 00:54:31.915808 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.915812 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.915815 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.915819 | orchestrator |
2026-04-09 00:54:31.915823 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 00:54:31.915827 | orchestrator | Thursday 09 April 2026 00:47:29 +0000 (0:00:00.771) 0:03:40.312 ********
2026-04-09 00:54:31.915831 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.915835 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.915839 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.915843 | orchestrator |
2026-04-09 00:54:31.915847 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 00:54:31.915850 | orchestrator | Thursday 09 April 2026 00:47:30 +0000 (0:00:01.136) 0:03:41.449 ********
2026-04-09 00:54:31.915854 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.915878 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.915883 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.915887 | orchestrator |
2026-04-09 00:54:31.915891 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 00:54:31.915895 | orchestrator | Thursday 09 April 2026 00:47:31 +0000 (0:00:00.376) 0:03:41.825 ********
2026-04-09 00:54:31.915899 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.915903 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.915906 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.915915 | orchestrator |
2026-04-09 00:54:31.915919 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 00:54:31.915923 | orchestrator | Thursday 09 April 2026 00:47:31 +0000 (0:00:00.451) 0:03:42.277 ********
2026-04-09 00:54:31.915927 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.915931 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.915935 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.915939 | orchestrator |
2026-04-09 00:54:31.915948 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 00:54:31.915954 | orchestrator | Thursday 09 April 2026 00:47:31 +0000 (0:00:00.321) 0:03:42.598 ********
2026-04-09 00:54:31.915997 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.916004 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.916010 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.916016 | orchestrator |
2026-04-09 00:54:31.916022 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 00:54:31.916028 | orchestrator | Thursday 09 April 2026 00:47:32 +0000 (0:00:00.653) 0:03:43.252 ********
2026-04-09 00:54:31.916035 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.916041 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.916048 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.916053 | orchestrator |
2026-04-09 00:54:31.916059 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 00:54:31.916066 | orchestrator | Thursday 09 April 2026 00:47:32 +0000 (0:00:00.305) 0:03:43.558 ********
2026-04-09 00:54:31.916072 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.916078 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.916084 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.916090 | orchestrator |
2026-04-09 00:54:31.916096 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 00:54:31.916102 | orchestrator | Thursday 09 April 2026 00:47:33 +0000 (0:00:00.309) 0:03:43.867 ********
2026-04-09 00:54:31.916107 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.916113 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.916118 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.916124 | orchestrator |
2026-04-09 00:54:31.916129 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 00:54:31.916135 | orchestrator | Thursday 09 April 2026 00:47:33 +0000 (0:00:00.342) 0:03:44.209 ********
2026-04-09 00:54:31.916141 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.916146 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.916152 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.916157 | orchestrator |
2026-04-09 00:54:31.916162 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 00:54:31.916168 | orchestrator | Thursday 09 April 2026 00:47:33 +0000 (0:00:00.324) 0:03:44.534 ********
2026-04-09 00:54:31.916174 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.916180 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.916186 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.916192 | orchestrator |
2026-04-09 00:54:31.916198 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 00:54:31.916204 | orchestrator | Thursday 09 April 2026 00:47:34 +0000 (0:00:00.710) 0:03:45.245 ********
2026-04-09 00:54:31.916210 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.916216 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.916222 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.916228 | orchestrator |
2026-04-09 00:54:31.916234 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-09 00:54:31.916241 | orchestrator | Thursday 09 April 2026 00:47:35 +0000 (0:00:00.608) 0:03:45.853 ********
2026-04-09 00:54:31.916248 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.916254 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.916261 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.916268 | orchestrator |
2026-04-09 00:54:31.916275 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-09 00:54:31.916288 | orchestrator | Thursday 09 April 2026 00:47:35 +0000 (0:00:00.359) 0:03:46.213 ********
2026-04-09 00:54:31.916295 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.916301 | orchestrator |
2026-04-09 00:54:31.916307 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-09 00:54:31.916314 | orchestrator | Thursday 09 April 2026 00:47:36 +0000 (0:00:00.978) 0:03:47.192 ********
2026-04-09 00:54:31.916320 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.916325 | orchestrator |
2026-04-09 00:54:31.916332 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-09 00:54:31.916338 | orchestrator | Thursday 09 April 2026 00:47:36 +0000 (0:00:00.156) 0:03:47.348 ********
2026-04-09 00:54:31.916344 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-09 00:54:31.916351 | orchestrator |
2026-04-09 00:54:31.916357 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-09 00:54:31.916363 | orchestrator | Thursday 09 April 2026 00:47:37 +0000 (0:00:01.207) 0:03:48.556 ********
2026-04-09 00:54:31.916369 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.916376 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.916381 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.916385 | orchestrator |
2026-04-09 00:54:31.916389 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-09 00:54:31.916393 | orchestrator | Thursday 09 April 2026 00:47:38 +0000 (0:00:00.454) 0:03:49.011 ********
2026-04-09 00:54:31.916401 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.916411 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.916417 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.916424 | orchestrator |
2026-04-09 00:54:31.916430 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-09 00:54:31.916464 | orchestrator | Thursday 09 April 2026 00:47:38 +0000 (0:00:00.411) 0:03:49.422 ********
2026-04-09 00:54:31.916471 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.916477 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.916483 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.916488 | orchestrator |
2026-04-09 00:54:31.916493 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-09 00:54:31.916499 | orchestrator | Thursday 09 April 2026 00:47:40 +0000 (0:00:01.408) 0:03:50.831 ********
2026-04-09 00:54:31.916504 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.916509 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.916515 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.916520 | orchestrator |
2026-04-09 00:54:31.916526 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-09 00:54:31.916532 | orchestrator | Thursday 09 April 2026 00:47:41 +0000 (0:00:00.970) 0:03:51.801 ********
2026-04-09 00:54:31.916538 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.916549 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.916555 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.916561 | orchestrator |
2026-04-09 00:54:31.916567 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-09 00:54:31.916573 | orchestrator | Thursday 09 April 2026 00:47:41 +0000 (0:00:00.698) 0:03:52.500 ********
2026-04-09 00:54:31.916580 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.916586 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.916593 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.916599 | orchestrator |
2026-04-09 00:54:31.916606 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-09 00:54:31.916612 | orchestrator | Thursday 09 April 2026 00:47:42 +0000 (0:00:00.631) 0:03:53.132 ********
2026-04-09 00:54:31.916618 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.916624 | orchestrator |
2026-04-09 00:54:31.916629 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-09 00:54:31.916636 | orchestrator | Thursday 09 April 2026 00:47:43 +0000 (0:00:01.155) 0:03:54.288 ********
2026-04-09 00:54:31.916648 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.916655 | orchestrator |
2026-04-09 00:54:31.916661 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-09 00:54:31.916667 | orchestrator | Thursday 09 April 2026 00:47:44 +0000 (0:00:01.093) 0:03:55.382 ********
2026-04-09 00:54:31.916673 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-09 00:54:31.916679 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:54:31.916686 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:54:31.916691 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 00:54:31.916697 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-09 00:54:31.916703 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-09 00:54:31.916708 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 00:54:31.916715 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-09 00:54:31.916721 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-09 00:54:31.916726 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-09 00:54:31.916732 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-09 00:54:31.916737 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-09 00:54:31.916743 | orchestrator |
2026-04-09 00:54:31.916749 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-09 00:54:31.916755 | orchestrator | Thursday 09 April 2026 00:47:47 +0000 (0:00:03.016) 0:03:58.398 ********
2026-04-09 00:54:31.916761 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.916767 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.916773 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.916796 | orchestrator |
2026-04-09 00:54:31.916803 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-09 00:54:31.916810 | orchestrator | Thursday 09 April 2026 00:47:48 +0000 (0:00:01.011) 0:03:59.410 ********
2026-04-09 00:54:31.916816 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.916822 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.916828 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.916835 | orchestrator |
2026-04-09 00:54:31.916840 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-09 00:54:31.916844 | orchestrator | Thursday 09 April 2026 00:47:49 +0000 (0:00:00.336) 0:03:59.746 ********
2026-04-09 00:54:31.916847 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.916851 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.916855 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.916859 | orchestrator |
2026-04-09 00:54:31.916863 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-09 00:54:31.916867 | orchestrator | Thursday 09 April 2026 00:47:49 +0000 (0:00:00.365) 0:04:00.112 ********
2026-04-09 00:54:31.916871 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.916875 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.916879 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.916884 | orchestrator |
2026-04-09 00:54:31.916890 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-09 00:54:31.916896 | orchestrator | Thursday 09 April 2026 00:47:51 +0000 (0:00:01.986) 0:04:02.098 ********
2026-04-09 00:54:31.916902 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.916907 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.916913 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.916919 | orchestrator |
2026-04-09 00:54:31.916925 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-09 00:54:31.916932 | orchestrator | Thursday 09 April 2026 00:47:52 +0000 (0:00:01.146) 0:04:03.244 ********
2026-04-09 00:54:31.916940 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.916946 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.916952 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.916979 | orchestrator |
2026-04-09 00:54:31.916987 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-09 00:54:31.916994 | orchestrator | Thursday 09 April 2026 00:47:52 +0000 (0:00:00.282) 0:04:03.527 ********
2026-04-09 00:54:31.917026 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.917031 | orchestrator |
2026-04-09 00:54:31.917035 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-09 00:54:31.917039 | orchestrator | Thursday 09 April 2026 00:47:53 +0000 (0:00:00.661) 0:04:04.189 ********
2026-04-09 00:54:31.917043 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917047 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917051 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917055 | orchestrator |
2026-04-09 00:54:31.917059 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-09 00:54:31.917062 | orchestrator | Thursday 09 April 2026 00:47:53 +0000 (0:00:00.280) 0:04:04.469 ********
2026-04-09 00:54:31.917066 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917070 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917078 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917082 | orchestrator |
2026-04-09 00:54:31.917086 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-09 00:54:31.917090 | orchestrator | Thursday 09 April 2026 00:47:54 +0000 (0:00:00.299) 0:04:04.769 ********
2026-04-09 00:54:31.917094 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.917098 | orchestrator |
2026-04-09 00:54:31.917102 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-09 00:54:31.917106 | orchestrator | Thursday 09 April 2026 00:47:54 +0000 (0:00:00.472) 0:04:05.241 ********
2026-04-09 00:54:31.917110 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.917114 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.917118 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.917122 | orchestrator |
2026-04-09 00:54:31.917125 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-09 00:54:31.917129 | orchestrator | Thursday 09 April 2026 00:47:56 +0000 (0:00:01.556) 0:04:06.797 ********
2026-04-09 00:54:31.917133 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.917137 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.917141 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.917145 | orchestrator |
2026-04-09 00:54:31.917149 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-09 00:54:31.917153 | orchestrator | Thursday 09 April 2026 00:47:57 +0000 (0:00:01.044) 0:04:07.842 ********
2026-04-09 00:54:31.917157 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.917161 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.917165 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.917169 | orchestrator |
2026-04-09 00:54:31.917173 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-09 00:54:31.917177 | orchestrator | Thursday 09 April 2026 00:47:58 +0000 (0:00:01.552) 0:04:09.395 ********
2026-04-09 00:54:31.917181 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.917185 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.917188 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.917192 | orchestrator |
2026-04-09 00:54:31.917196 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-09 00:54:31.917200 | orchestrator | Thursday 09 April 2026 00:48:00 +0000 (0:00:01.856) 0:04:11.251 ********
2026-04-09 00:54:31.917204 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.917208 | orchestrator |
2026-04-09 00:54:31.917212 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-09 00:54:31.917216 | orchestrator | Thursday 09 April 2026 00:48:01 +0000 (0:00:00.942) 0:04:12.193 ********
2026-04-09 00:54:31.917223 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-04-09 00:54:31.917227 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.917230 | orchestrator |
2026-04-09 00:54:31.917234 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-09 00:54:31.917238 | orchestrator | Thursday 09 April 2026 00:48:23 +0000 (0:00:21.678) 0:04:33.871 ********
2026-04-09 00:54:31.917242 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.917246 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.917250 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.917254 | orchestrator |
2026-04-09 00:54:31.917258 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-09 00:54:31.917262 | orchestrator | Thursday 09 April 2026 00:48:31 +0000 (0:00:08.076) 0:04:41.948 ********
2026-04-09 00:54:31.917266 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917270 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917274 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917277 | orchestrator |
2026-04-09 00:54:31.917281 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-09 00:54:31.917285 | orchestrator | Thursday 09 April 2026 00:48:31 +0000 (0:00:00.246) 0:04:42.194 ********
2026-04-09 00:54:31.917290 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a1e6338432ee530312636e360760ef47ee2ee2c'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-09 00:54:31.917296 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a1e6338432ee530312636e360760ef47ee2ee2c'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-09 00:54:31.917313 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a1e6338432ee530312636e360760ef47ee2ee2c'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-09 00:54:31.917321 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a1e6338432ee530312636e360760ef47ee2ee2c'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-09 00:54:31.917325 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a1e6338432ee530312636e360760ef47ee2ee2c'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-09 00:54:31.917330 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a1e6338432ee530312636e360760ef47ee2ee2c'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2a1e6338432ee530312636e360760ef47ee2ee2c'}])
2026-04-09 00:54:31.917335 | orchestrator |
2026-04-09 00:54:31.917339 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-09 00:54:31.917345 | orchestrator | Thursday 09 April 2026 00:48:46 +0000 (0:00:15.038) 0:04:57.233 ********
2026-04-09 00:54:31.917349 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917353 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917357 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917361 | orchestrator |
2026-04-09 00:54:31.917365 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-09 00:54:31.917369 | orchestrator | Thursday 09 April 2026 00:48:46 +0000 (0:00:00.271) 0:04:57.504 ********
2026-04-09 00:54:31.917374 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.917379 | orchestrator |
2026-04-09 00:54:31.917383 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-09 00:54:31.917388 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.583) 0:04:58.087 ********
2026-04-09 00:54:31.917392 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.917397 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.917401 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.917406 | orchestrator |
2026-04-09 00:54:31.917411 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-09 00:54:31.917415 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.257) 0:04:58.345 ********
2026-04-09 00:54:31.917420 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917425 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917429 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917434 | orchestrator |
2026-04-09 00:54:31.917439 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-09 00:54:31.917443 | orchestrator | Thursday 09 April 2026 00:48:47 +0000 (0:00:00.270) 0:04:58.616 ********
2026-04-09 00:54:31.917448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:54:31.917454 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-09 00:54:31.917460 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-09 00:54:31.917467 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917473 | orchestrator |
2026-04-09 00:54:31.917479 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-09 00:54:31.917485 | orchestrator | Thursday 09 April 2026 00:48:48 +0000 (0:00:00.729) 0:04:59.345 ********
2026-04-09 00:54:31.917491 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.917497 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.917503 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.917509 | orchestrator |
2026-04-09 00:54:31.917516 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-04-09 00:54:31.917523 | orchestrator |
2026-04-09 00:54:31.917530 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 00:54:31.917537 | orchestrator | Thursday 09 April 2026 00:48:49 +0000 (0:00:00.645) 0:04:59.990 ********
2026-04-09 00:54:31.917544 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.917550 | orchestrator |
2026-04-09 00:54:31.917554 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 00:54:31.917559 | orchestrator | Thursday 09 April 2026 00:48:49 +0000 (0:00:00.409) 0:05:00.400 ********
2026-04-09 00:54:31.917565 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.917571 | orchestrator |
2026-04-09 00:54:31.917577 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 00:54:31.917596 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:00.579) 0:05:00.979 ********
2026-04-09 00:54:31.917601 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.917606 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.917610 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.917615 | orchestrator |
2026-04-09 00:54:31.917620 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 00:54:31.917628 | orchestrator | Thursday 09 April 2026 00:48:50 +0000 (0:00:00.657) 0:05:01.636 ********
2026-04-09 00:54:31.917632 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917637 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917642 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917647 | orchestrator |
2026-04-09 00:54:31.917651 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 00:54:31.917656 | orchestrator | Thursday 09 April 2026 00:48:51 +0000 (0:00:00.250) 0:05:01.887 ********
2026-04-09 00:54:31.917659 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917666 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917670 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917674 | orchestrator |
2026-04-09 00:54:31.917678 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 00:54:31.917682 | orchestrator | Thursday 09 April 2026 00:48:51 +0000 (0:00:00.247) 0:05:02.134 ********
2026-04-09 00:54:31.917686 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917690 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917694 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917698 | orchestrator |
2026-04-09 00:54:31.917702 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 00:54:31.917706 | orchestrator | Thursday 09 April 2026 00:48:51 +0000 (0:00:00.419) 0:05:02.554 ********
2026-04-09 00:54:31.917709 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.917713 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.917717 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.917721 | orchestrator |
2026-04-09 00:54:31.917725 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 00:54:31.917729 | orchestrator | Thursday 09 April 2026 00:48:52 +0000 (0:00:00.783) 0:05:03.337 ********
2026-04-09 00:54:31.917733 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917737 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917741 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917745 | orchestrator |
2026-04-09 00:54:31.917749 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 00:54:31.917753 | orchestrator | Thursday 09 April 2026 00:48:52 +0000 (0:00:00.294) 0:05:03.632 ********
2026-04-09 00:54:31.917757 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917761 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917765 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917769 | orchestrator |
2026-04-09 00:54:31.917772 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 00:54:31.917776 | orchestrator | Thursday 09 April 2026 00:48:53 +0000 (0:00:00.258) 0:05:03.890 ********
2026-04-09 00:54:31.917780 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.917784 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.917788 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.917792 | orchestrator |
2026-04-09 00:54:31.917796 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 00:54:31.917800 | orchestrator | Thursday 09 April 2026 00:48:53 +0000 (0:00:00.664) 0:05:04.554 ********
2026-04-09 00:54:31.917804 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.917808 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.917812 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.917816 | orchestrator |
2026-04-09 00:54:31.917820 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 00:54:31.917824 | orchestrator | Thursday 09 April 2026 00:48:54 +0000 (0:00:00.925) 0:05:05.479 ********
2026-04-09 00:54:31.917828 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917831 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917835 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917839 | orchestrator |
2026-04-09 00:54:31.917843 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 00:54:31.917847 | orchestrator | Thursday 09 April 2026 00:48:55 +0000 (0:00:00.281) 0:05:05.761 ********
2026-04-09 00:54:31.917854 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.917858 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.917861 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.917865 | orchestrator |
2026-04-09 00:54:31.917869 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 00:54:31.917873 | orchestrator | Thursday 09 April 2026 00:48:55 +0000 (0:00:00.298) 0:05:06.059 ********
2026-04-09 00:54:31.917877 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917881 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917885 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917889 | orchestrator |
2026-04-09 00:54:31.917893 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 00:54:31.917897 | orchestrator | Thursday 09 April 2026 00:48:55 +0000 (0:00:00.269) 0:05:06.329 ********
2026-04-09 00:54:31.917901 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917905 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917909 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917913 | orchestrator |
2026-04-09 00:54:31.917917 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 00:54:31.917920 | orchestrator | Thursday 09 April 2026 00:48:56 +0000 (0:00:00.450) 0:05:06.780 ********
2026-04-09 00:54:31.917924 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917928 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917932 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917938 | orchestrator |
2026-04-09 00:54:31.917945 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 00:54:31.917951 | orchestrator | Thursday 09 April 2026 00:48:56 +0000 (0:00:00.265) 0:05:07.045 ********
2026-04-09 00:54:31.917956 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.917973 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.917979 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.917985 | orchestrator |
2026-04-09 00:54:31.917991 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 00:54:31.918059 | orchestrator | Thursday 09 April 2026 00:48:56 +0000 (0:00:00.253) 0:05:07.299 ********
2026-04-09 00:54:31.918070 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.918077 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.918083 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.918089 | orchestrator |
2026-04-09 00:54:31.918095 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 00:54:31.918102 | orchestrator | Thursday 09 April 2026 00:48:56 +0000 (0:00:00.260) 0:05:07.560 ********
2026-04-09 00:54:31.918108 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.918114 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.918121 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.918127 | orchestrator |
2026-04-09 00:54:31.918134 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 00:54:31.918140 | orchestrator | Thursday 09 April 2026 00:48:57 +0000 (0:00:00.459) 0:05:08.020 ********
2026-04-09 00:54:31.918146 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.918153 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.918164 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.918168 | orchestrator |
2026-04-09 00:54:31.918173 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 00:54:31.918180 | orchestrator | Thursday 09 April 2026 00:48:57 +0000 (0:00:00.298) 0:05:08.319 ********
2026-04-09 00:54:31.918186 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.918192 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.918198 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.918205 | orchestrator |
2026-04-09 00:54:31.918212 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-09 00:54:31.918218 | orchestrator | Thursday 09 April 2026 00:48:58 +0000 (0:00:00.475) 0:05:08.795 ********
2026-04-09 00:54:31.918225 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-09 00:54:31.918241 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-09 00:54:31.918248 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-09 00:54:31.918254 | orchestrator |
2026-04-09 00:54:31.918260 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-09 00:54:31.918264 | orchestrator | Thursday 09 April 2026 00:48:58 +0000 (0:00:00.788) 0:05:09.583 ********
2026-04-09 00:54:31.918268 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.918275 | orchestrator |
2026-04-09 00:54:31.918282 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-09 00:54:31.918288 | orchestrator | Thursday 09 April 2026 00:48:59 +0000 (0:00:00.623) 0:05:10.206 ********
2026-04-09 00:54:31.918294 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.918301 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.918308 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.918314 | orchestrator |
2026-04-09 00:54:31.918320 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-09 00:54:31.918327 | orchestrator | Thursday 09 April 2026 00:49:00 +0000 (0:00:00.657) 0:05:10.864 ********
2026-04-09 00:54:31.918333 |
orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.918340 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.918346 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.918353 | orchestrator | 2026-04-09 00:54:31.918359 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-09 00:54:31.918366 | orchestrator | Thursday 09 April 2026 00:49:00 +0000 (0:00:00.280) 0:05:11.144 ******** 2026-04-09 00:54:31.918372 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:54:31.918379 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:54:31.918385 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:54:31.918392 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-09 00:54:31.918399 | orchestrator | 2026-04-09 00:54:31.918405 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-09 00:54:31.918412 | orchestrator | Thursday 09 April 2026 00:49:10 +0000 (0:00:09.816) 0:05:20.960 ******** 2026-04-09 00:54:31.918418 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.918424 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.918431 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.918437 | orchestrator | 2026-04-09 00:54:31.918443 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-09 00:54:31.918449 | orchestrator | Thursday 09 April 2026 00:49:10 +0000 (0:00:00.472) 0:05:21.433 ******** 2026-04-09 00:54:31.918456 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-09 00:54:31.918462 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 00:54:31.918469 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-09 00:54:31.918475 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-09 00:54:31.918482 | orchestrator | ok: 
[testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:54:31.918488 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:54:31.918494 | orchestrator | 2026-04-09 00:54:31.918502 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-09 00:54:31.918506 | orchestrator | Thursday 09 April 2026 00:49:12 +0000 (0:00:02.240) 0:05:23.673 ******** 2026-04-09 00:54:31.918510 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-09 00:54:31.918514 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-09 00:54:31.918518 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-09 00:54:31.918522 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-09 00:54:31.918526 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-09 00:54:31.918530 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-09 00:54:31.918537 | orchestrator | 2026-04-09 00:54:31.918541 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-09 00:54:31.918545 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:01.105) 0:05:24.779 ******** 2026-04-09 00:54:31.918549 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.918553 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.918557 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.918561 | orchestrator | 2026-04-09 00:54:31.918585 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-09 00:54:31.918590 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:00.532) 0:05:25.311 ******** 2026-04-09 00:54:31.918594 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.918598 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.918602 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.918606 | 
orchestrator | 2026-04-09 00:54:31.918610 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-09 00:54:31.918614 | orchestrator | Thursday 09 April 2026 00:49:14 +0000 (0:00:00.408) 0:05:25.720 ******** 2026-04-09 00:54:31.918618 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.918622 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.918625 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.918629 | orchestrator | 2026-04-09 00:54:31.918633 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-09 00:54:31.918640 | orchestrator | Thursday 09 April 2026 00:49:15 +0000 (0:00:00.244) 0:05:25.964 ******** 2026-04-09 00:54:31.918644 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.918648 | orchestrator | 2026-04-09 00:54:31.918652 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-09 00:54:31.918656 | orchestrator | Thursday 09 April 2026 00:49:15 +0000 (0:00:00.427) 0:05:26.391 ******** 2026-04-09 00:54:31.918660 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.918664 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.918668 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.918672 | orchestrator | 2026-04-09 00:54:31.918676 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-09 00:54:31.918680 | orchestrator | Thursday 09 April 2026 00:49:15 +0000 (0:00:00.283) 0:05:26.674 ******** 2026-04-09 00:54:31.918683 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.918687 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.918691 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.918695 | orchestrator | 2026-04-09 00:54:31.918699 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-09 00:54:31.918703 | orchestrator | Thursday 09 April 2026 00:49:16 +0000 (0:00:00.409) 0:05:27.083 ******** 2026-04-09 00:54:31.918707 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.918711 | orchestrator | 2026-04-09 00:54:31.918715 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-09 00:54:31.918719 | orchestrator | Thursday 09 April 2026 00:49:16 +0000 (0:00:00.442) 0:05:27.525 ******** 2026-04-09 00:54:31.918723 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.918727 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.918732 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.918738 | orchestrator | 2026-04-09 00:54:31.918745 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-09 00:54:31.918751 | orchestrator | Thursday 09 April 2026 00:49:17 +0000 (0:00:01.073) 0:05:28.599 ******** 2026-04-09 00:54:31.918757 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.918764 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.918771 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.918778 | orchestrator | 2026-04-09 00:54:31.918785 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-09 00:54:31.918795 | orchestrator | Thursday 09 April 2026 00:49:19 +0000 (0:00:01.220) 0:05:29.819 ******** 2026-04-09 00:54:31.918799 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.918803 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.918807 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.918811 | orchestrator | 2026-04-09 00:54:31.918814 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-04-09 00:54:31.918819 | orchestrator | Thursday 09 April 2026 00:49:20 +0000 (0:00:01.493) 0:05:31.313 ******** 2026-04-09 00:54:31.918823 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.918826 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.918830 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.918834 | orchestrator | 2026-04-09 00:54:31.918838 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-09 00:54:31.918842 | orchestrator | Thursday 09 April 2026 00:49:22 +0000 (0:00:01.518) 0:05:32.832 ******** 2026-04-09 00:54:31.918846 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.918850 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.918854 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-09 00:54:31.918858 | orchestrator | 2026-04-09 00:54:31.918862 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-09 00:54:31.918866 | orchestrator | Thursday 09 April 2026 00:49:22 +0000 (0:00:00.388) 0:05:33.221 ******** 2026-04-09 00:54:31.918872 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-04-09 00:54:31.918878 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-04-09 00:54:31.918885 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-04-09 00:54:31.918891 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-04-09 00:54:31.918897 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-04-09 00:54:31.918903 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 2026-04-09 00:54:31.918910 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-09 00:54:31.918916 | orchestrator | 2026-04-09 00:54:31.918923 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-09 00:54:31.918949 | orchestrator | Thursday 09 April 2026 00:49:58 +0000 (0:00:36.240) 0:06:09.462 ******** 2026-04-09 00:54:31.918954 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-09 00:54:31.918969 | orchestrator | 2026-04-09 00:54:31.918973 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-09 00:54:31.918978 | orchestrator | Thursday 09 April 2026 00:50:00 +0000 (0:00:01.643) 0:06:11.105 ******** 2026-04-09 00:54:31.918981 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.918986 | orchestrator | 2026-04-09 00:54:31.918992 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-09 00:54:31.918998 | orchestrator | Thursday 09 April 2026 00:50:00 +0000 (0:00:00.288) 0:06:11.393 ******** 2026-04-09 00:54:31.919004 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.919009 | orchestrator | 2026-04-09 00:54:31.919015 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-04-09 00:54:31.919026 | orchestrator | Thursday 09 April 2026 00:50:00 +0000 (0:00:00.119) 0:06:11.513 ******** 2026-04-09 00:54:31.919032 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-04-09 00:54:31.919039 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-04-09 00:54:31.919045 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-04-09 
00:54:31.919051 | orchestrator | 2026-04-09 00:54:31.919057 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-04-09 00:54:31.919069 | orchestrator | Thursday 09 April 2026 00:50:07 +0000 (0:00:07.162) 0:06:18.675 ******** 2026-04-09 00:54:31.919076 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-04-09 00:54:31.919082 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-04-09 00:54:31.919089 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-04-09 00:54:31.919093 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-04-09 00:54:31.919097 | orchestrator | 2026-04-09 00:54:31.919101 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 00:54:31.919105 | orchestrator | Thursday 09 April 2026 00:50:12 +0000 (0:00:04.741) 0:06:23.416 ******** 2026-04-09 00:54:31.919109 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.919113 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.919116 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.919120 | orchestrator | 2026-04-09 00:54:31.919124 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-09 00:54:31.919128 | orchestrator | Thursday 09 April 2026 00:50:13 +0000 (0:00:00.853) 0:06:24.270 ******** 2026-04-09 00:54:31.919132 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.919136 | orchestrator | 2026-04-09 00:54:31.919140 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-09 00:54:31.919144 | orchestrator | Thursday 09 April 2026 00:50:14 +0000 (0:00:00.481) 0:06:24.751 ******** 2026-04-09 00:54:31.919148 | orchestrator | ok: [testbed-node-0] 
2026-04-09 00:54:31.919152 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.919155 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.919160 | orchestrator | 2026-04-09 00:54:31.919164 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-09 00:54:31.919168 | orchestrator | Thursday 09 April 2026 00:50:14 +0000 (0:00:00.275) 0:06:25.027 ******** 2026-04-09 00:54:31.919172 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:54:31.919176 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:54:31.919180 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:54:31.919183 | orchestrator | 2026-04-09 00:54:31.919187 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-09 00:54:31.919191 | orchestrator | Thursday 09 April 2026 00:50:15 +0000 (0:00:01.340) 0:06:26.368 ******** 2026-04-09 00:54:31.919195 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-09 00:54:31.919199 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-09 00:54:31.919203 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-09 00:54:31.919207 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.919211 | orchestrator | 2026-04-09 00:54:31.919215 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-09 00:54:31.919219 | orchestrator | Thursday 09 April 2026 00:50:16 +0000 (0:00:00.555) 0:06:26.923 ******** 2026-04-09 00:54:31.919223 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.919227 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.919231 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.919234 | orchestrator | 2026-04-09 00:54:31.919238 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-04-09 00:54:31.919242 | orchestrator | 2026-04-09 00:54:31.919246 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:54:31.919250 | orchestrator | Thursday 09 April 2026 00:50:16 +0000 (0:00:00.470) 0:06:27.394 ******** 2026-04-09 00:54:31.919254 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.919258 | orchestrator | 2026-04-09 00:54:31.919262 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-09 00:54:31.919266 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.551) 0:06:27.946 ******** 2026-04-09 00:54:31.919273 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.919277 | orchestrator | 2026-04-09 00:54:31.919281 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:54:31.919284 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.469) 0:06:28.416 ******** 2026-04-09 00:54:31.919288 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.919292 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.919296 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.919300 | orchestrator | 2026-04-09 00:54:31.919322 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:54:31.919326 | orchestrator | Thursday 09 April 2026 00:50:17 +0000 (0:00:00.269) 0:06:28.685 ******** 2026-04-09 00:54:31.919330 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919334 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919338 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919342 | orchestrator | 2026-04-09 00:54:31.919346 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 
00:54:31.919350 | orchestrator | Thursday 09 April 2026 00:50:18 +0000 (0:00:00.916) 0:06:29.602 ******** 2026-04-09 00:54:31.919354 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919358 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919362 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919366 | orchestrator | 2026-04-09 00:54:31.919370 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 00:54:31.919376 | orchestrator | Thursday 09 April 2026 00:50:19 +0000 (0:00:00.706) 0:06:30.308 ******** 2026-04-09 00:54:31.919380 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919384 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919388 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919392 | orchestrator | 2026-04-09 00:54:31.919396 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 00:54:31.919400 | orchestrator | Thursday 09 April 2026 00:50:20 +0000 (0:00:00.641) 0:06:30.949 ******** 2026-04-09 00:54:31.919404 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.919408 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.919412 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.919415 | orchestrator | 2026-04-09 00:54:31.919420 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-09 00:54:31.919423 | orchestrator | Thursday 09 April 2026 00:50:20 +0000 (0:00:00.254) 0:06:31.204 ******** 2026-04-09 00:54:31.919427 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.919431 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.919435 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.919439 | orchestrator | 2026-04-09 00:54:31.919443 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:54:31.919447 | orchestrator | 
Thursday 09 April 2026 00:50:20 +0000 (0:00:00.389) 0:06:31.593 ******** 2026-04-09 00:54:31.919451 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.919455 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.919459 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.919463 | orchestrator | 2026-04-09 00:54:31.919467 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:54:31.919471 | orchestrator | Thursday 09 April 2026 00:50:21 +0000 (0:00:00.258) 0:06:31.852 ******** 2026-04-09 00:54:31.919475 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919478 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919482 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919486 | orchestrator | 2026-04-09 00:54:31.919490 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:54:31.919494 | orchestrator | Thursday 09 April 2026 00:50:21 +0000 (0:00:00.674) 0:06:32.527 ******** 2026-04-09 00:54:31.919498 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919502 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919509 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919513 | orchestrator | 2026-04-09 00:54:31.919517 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:54:31.919521 | orchestrator | Thursday 09 April 2026 00:50:22 +0000 (0:00:00.672) 0:06:33.199 ******** 2026-04-09 00:54:31.919525 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.919529 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.919533 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.919537 | orchestrator | 2026-04-09 00:54:31.919540 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:54:31.919544 | orchestrator | Thursday 09 April 2026 00:50:23 +0000 
(0:00:00.581) 0:06:33.781 ******** 2026-04-09 00:54:31.919548 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.919552 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.919556 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.919560 | orchestrator | 2026-04-09 00:54:31.919564 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:54:31.919568 | orchestrator | Thursday 09 April 2026 00:50:23 +0000 (0:00:00.287) 0:06:34.068 ******** 2026-04-09 00:54:31.919572 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919576 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919580 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919584 | orchestrator | 2026-04-09 00:54:31.919588 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 00:54:31.919592 | orchestrator | Thursday 09 April 2026 00:50:23 +0000 (0:00:00.302) 0:06:34.370 ******** 2026-04-09 00:54:31.919596 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919600 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919603 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919607 | orchestrator | 2026-04-09 00:54:31.919611 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:54:31.919615 | orchestrator | Thursday 09 April 2026 00:50:23 +0000 (0:00:00.322) 0:06:34.693 ******** 2026-04-09 00:54:31.919619 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919623 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919627 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919631 | orchestrator | 2026-04-09 00:54:31.919635 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 00:54:31.919639 | orchestrator | Thursday 09 April 2026 00:50:24 +0000 (0:00:00.601) 0:06:35.295 ******** 2026-04-09 
00:54:31.919643 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.919647 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.919651 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.919655 | orchestrator | 2026-04-09 00:54:31.919659 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:54:31.919663 | orchestrator | Thursday 09 April 2026 00:50:24 +0000 (0:00:00.303) 0:06:35.599 ******** 2026-04-09 00:54:31.919666 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.919670 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.919674 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.919678 | orchestrator | 2026-04-09 00:54:31.919682 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 00:54:31.919689 | orchestrator | Thursday 09 April 2026 00:50:25 +0000 (0:00:00.293) 0:06:35.892 ******** 2026-04-09 00:54:31.919693 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.919697 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.919701 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.919705 | orchestrator | 2026-04-09 00:54:31.919709 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:54:31.919713 | orchestrator | Thursday 09 April 2026 00:50:25 +0000 (0:00:00.284) 0:06:36.176 ******** 2026-04-09 00:54:31.919717 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919721 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919725 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919728 | orchestrator | 2026-04-09 00:54:31.919737 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:54:31.919741 | orchestrator | Thursday 09 April 2026 00:50:25 +0000 (0:00:00.558) 0:06:36.735 ******** 2026-04-09 00:54:31.919745 | 
orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919751 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919755 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919759 | orchestrator | 2026-04-09 00:54:31.919763 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-09 00:54:31.919767 | orchestrator | Thursday 09 April 2026 00:50:26 +0000 (0:00:00.520) 0:06:37.255 ******** 2026-04-09 00:54:31.919771 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919775 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919778 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919782 | orchestrator | 2026-04-09 00:54:31.919786 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-09 00:54:31.919790 | orchestrator | Thursday 09 April 2026 00:50:26 +0000 (0:00:00.300) 0:06:37.555 ******** 2026-04-09 00:54:31.919794 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 00:54:31.919798 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:54:31.919802 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:54:31.919806 | orchestrator | 2026-04-09 00:54:31.919810 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-09 00:54:31.919814 | orchestrator | Thursday 09 April 2026 00:50:27 +0000 (0:00:00.883) 0:06:38.438 ******** 2026-04-09 00:54:31.919818 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.919822 | orchestrator | 2026-04-09 00:54:31.919825 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-09 00:54:31.919829 | orchestrator | Thursday 09 April 2026 00:50:28 +0000 
(0:00:00.762) 0:06:39.201 ******** 2026-04-09 00:54:31.919833 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.919837 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.919841 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.919845 | orchestrator | 2026-04-09 00:54:31.919849 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-09 00:54:31.919853 | orchestrator | Thursday 09 April 2026 00:50:28 +0000 (0:00:00.297) 0:06:39.499 ******** 2026-04-09 00:54:31.919857 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.919861 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.919865 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.919869 | orchestrator | 2026-04-09 00:54:31.919873 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-09 00:54:31.919877 | orchestrator | Thursday 09 April 2026 00:50:29 +0000 (0:00:00.289) 0:06:39.788 ******** 2026-04-09 00:54:31.919881 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919884 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919888 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919895 | orchestrator | 2026-04-09 00:54:31.919901 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-09 00:54:31.919907 | orchestrator | Thursday 09 April 2026 00:50:29 +0000 (0:00:00.919) 0:06:40.707 ******** 2026-04-09 00:54:31.919914 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.919919 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.919926 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.919932 | orchestrator | 2026-04-09 00:54:31.919939 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-09 00:54:31.919946 | orchestrator | Thursday 09 April 2026 00:50:30 +0000 (0:00:00.321) 0:06:41.029 ******** 
2026-04-09 00:54:31.919952 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-09 00:54:31.919989 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-09 00:54:31.919998 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-09 00:54:31.920002 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-09 00:54:31.920006 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-09 00:54:31.920010 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-09 00:54:31.920014 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-09 00:54:31.920018 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-09 00:54:31.920022 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-09 00:54:31.920026 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-09 00:54:31.920030 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-09 00:54:31.920033 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-09 00:54:31.920045 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-09 00:54:31.920049 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-09 00:54:31.920053 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-09 00:54:31.920057 | orchestrator | 2026-04-09 00:54:31.920060 | orchestrator 
| TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-09 00:54:31.920064 | orchestrator | Thursday 09 April 2026 00:50:33 +0000 (0:00:03.366) 0:06:44.395 ******** 2026-04-09 00:54:31.920068 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920072 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.920076 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.920080 | orchestrator | 2026-04-09 00:54:31.920084 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-09 00:54:31.920090 | orchestrator | Thursday 09 April 2026 00:50:34 +0000 (0:00:00.365) 0:06:44.761 ******** 2026-04-09 00:54:31.920094 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.920098 | orchestrator | 2026-04-09 00:54:31.920102 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-09 00:54:31.920106 | orchestrator | Thursday 09 April 2026 00:50:34 +0000 (0:00:00.584) 0:06:45.345 ******** 2026-04-09 00:54:31.920110 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-09 00:54:31.920114 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-09 00:54:31.920118 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-09 00:54:31.920122 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-09 00:54:31.920126 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-09 00:54:31.920130 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-09 00:54:31.920134 | orchestrator | 2026-04-09 00:54:31.920138 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-09 00:54:31.920142 | orchestrator | Thursday 09 April 2026 00:50:35 +0000 (0:00:00.966) 
0:06:46.311 ******** 2026-04-09 00:54:31.920146 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:54:31.920150 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 00:54:31.920154 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:54:31.920158 | orchestrator | 2026-04-09 00:54:31.920162 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-09 00:54:31.920165 | orchestrator | Thursday 09 April 2026 00:50:37 +0000 (0:00:02.177) 0:06:48.489 ******** 2026-04-09 00:54:31.920173 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 00:54:31.920177 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 00:54:31.920181 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.920185 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 00:54:31.920188 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-09 00:54:31.920192 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.920205 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 00:54:31.920209 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-09 00:54:31.920213 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.920217 | orchestrator | 2026-04-09 00:54:31.920221 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-09 00:54:31.920225 | orchestrator | Thursday 09 April 2026 00:50:38 +0000 (0:00:01.199) 0:06:49.688 ******** 2026-04-09 00:54:31.920229 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-09 00:54:31.920233 | orchestrator | 2026-04-09 00:54:31.920237 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-09 00:54:31.920241 | orchestrator | Thursday 09 April 2026 00:50:42 +0000 (0:00:03.160) 
0:06:52.848 ******** 2026-04-09 00:54:31.920245 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.920249 | orchestrator | 2026-04-09 00:54:31.920253 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-09 00:54:31.920257 | orchestrator | Thursday 09 April 2026 00:50:42 +0000 (0:00:00.539) 0:06:53.388 ******** 2026-04-09 00:54:31.920261 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e1b9ff7a-7324-53df-902d-27a5c0e1e380', 'data_vg': 'ceph-e1b9ff7a-7324-53df-902d-27a5c0e1e380'}) 2026-04-09 00:54:31.920267 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-bd7ebef9-c50f-5d78-8aca-8eab443ce24e', 'data_vg': 'ceph-bd7ebef9-c50f-5d78-8aca-8eab443ce24e'}) 2026-04-09 00:54:31.920271 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a7170513-cc74-5c6a-bf20-0648bd8fe211', 'data_vg': 'ceph-a7170513-cc74-5c6a-bf20-0648bd8fe211'}) 2026-04-09 00:54:31.920275 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c85b9e91-1f7c-51a1-92b9-1f1081da5c54', 'data_vg': 'ceph-c85b9e91-1f7c-51a1-92b9-1f1081da5c54'}) 2026-04-09 00:54:31.920279 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c145dd89-b6cf-5d58-ae96-f0c6197297d1', 'data_vg': 'ceph-c145dd89-b6cf-5d58-ae96-f0c6197297d1'}) 2026-04-09 00:54:31.920283 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b054f04d-2068-53f2-80e7-c9a997d8c167', 'data_vg': 'ceph-b054f04d-2068-53f2-80e7-c9a997d8c167'}) 2026-04-09 00:54:31.920287 | orchestrator | 2026-04-09 00:54:31.920291 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-09 00:54:31.920295 | orchestrator | Thursday 09 April 2026 00:51:22 +0000 (0:00:39.452) 0:07:32.840 ******** 2026-04-09 00:54:31.920302 | orchestrator | skipping: [testbed-node-3] 2026-04-09 
00:54:31.920306 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.920310 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.920317 | orchestrator | 2026-04-09 00:54:31.920321 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-09 00:54:31.920325 | orchestrator | Thursday 09 April 2026 00:51:22 +0000 (0:00:00.440) 0:07:33.281 ******** 2026-04-09 00:54:31.920329 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.920333 | orchestrator | 2026-04-09 00:54:31.920337 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-09 00:54:31.920341 | orchestrator | Thursday 09 April 2026 00:51:23 +0000 (0:00:00.469) 0:07:33.751 ******** 2026-04-09 00:54:31.920344 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.920348 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.920354 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.920363 | orchestrator | 2026-04-09 00:54:31.920367 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-09 00:54:31.920371 | orchestrator | Thursday 09 April 2026 00:51:23 +0000 (0:00:00.663) 0:07:34.414 ******** 2026-04-09 00:54:31.920375 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.920379 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.920383 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.920387 | orchestrator | 2026-04-09 00:54:31.920391 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-09 00:54:31.920395 | orchestrator | Thursday 09 April 2026 00:51:26 +0000 (0:00:02.730) 0:07:37.145 ******** 2026-04-09 00:54:31.920399 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.920403 | 
orchestrator | 2026-04-09 00:54:31.920407 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-09 00:54:31.920411 | orchestrator | Thursday 09 April 2026 00:51:26 +0000 (0:00:00.449) 0:07:37.595 ******** 2026-04-09 00:54:31.920415 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.920419 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.920423 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.920427 | orchestrator | 2026-04-09 00:54:31.920431 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-09 00:54:31.920434 | orchestrator | Thursday 09 April 2026 00:51:28 +0000 (0:00:01.229) 0:07:38.825 ******** 2026-04-09 00:54:31.920438 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.920442 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.920446 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.920450 | orchestrator | 2026-04-09 00:54:31.920454 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-09 00:54:31.920458 | orchestrator | Thursday 09 April 2026 00:51:29 +0000 (0:00:01.360) 0:07:40.185 ******** 2026-04-09 00:54:31.920462 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.920466 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.920470 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.920474 | orchestrator | 2026-04-09 00:54:31.920477 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-09 00:54:31.920481 | orchestrator | Thursday 09 April 2026 00:51:31 +0000 (0:00:01.751) 0:07:41.937 ******** 2026-04-09 00:54:31.920485 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920489 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.920493 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.920497 | orchestrator 
| 2026-04-09 00:54:31.920501 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-09 00:54:31.920505 | orchestrator | Thursday 09 April 2026 00:51:31 +0000 (0:00:00.313) 0:07:42.250 ******** 2026-04-09 00:54:31.920509 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920513 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.920517 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.920521 | orchestrator | 2026-04-09 00:54:31.920525 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-09 00:54:31.920529 | orchestrator | Thursday 09 April 2026 00:51:31 +0000 (0:00:00.315) 0:07:42.565 ******** 2026-04-09 00:54:31.920532 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-04-09 00:54:31.920536 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-04-09 00:54:31.920540 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-04-09 00:54:31.920544 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-09 00:54:31.920548 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-04-09 00:54:31.920552 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-09 00:54:31.920556 | orchestrator | 2026-04-09 00:54:31.920560 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-09 00:54:31.920564 | orchestrator | Thursday 09 April 2026 00:51:33 +0000 (0:00:01.353) 0:07:43.919 ******** 2026-04-09 00:54:31.920568 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-04-09 00:54:31.920575 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-09 00:54:31.920579 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-09 00:54:31.920583 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-09 00:54:31.920586 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-09 00:54:31.920590 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-09 00:54:31.920594 | 
orchestrator | 2026-04-09 00:54:31.920598 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-09 00:54:31.920602 | orchestrator | Thursday 09 April 2026 00:51:35 +0000 (0:00:02.201) 0:07:46.121 ******** 2026-04-09 00:54:31.920606 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-04-09 00:54:31.920610 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-09 00:54:31.920614 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-09 00:54:31.920618 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-09 00:54:31.920622 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-09 00:54:31.920626 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-09 00:54:31.920630 | orchestrator | 2026-04-09 00:54:31.920634 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-09 00:54:31.920638 | orchestrator | Thursday 09 April 2026 00:51:38 +0000 (0:00:03.495) 0:07:49.616 ******** 2026-04-09 00:54:31.920642 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920648 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.920652 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-09 00:54:31.920656 | orchestrator | 2026-04-09 00:54:31.920660 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-09 00:54:31.920664 | orchestrator | Thursday 09 April 2026 00:51:41 +0000 (0:00:02.444) 0:07:52.061 ******** 2026-04-09 00:54:31.920668 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920672 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.920676 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-04-09 00:54:31.920680 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-09 00:54:31.920684 | orchestrator | 2026-04-09 00:54:31.920688 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-09 00:54:31.920694 | orchestrator | Thursday 09 April 2026 00:51:54 +0000 (0:00:12.787) 0:08:04.848 ******** 2026-04-09 00:54:31.920698 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920702 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.920706 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.920710 | orchestrator | 2026-04-09 00:54:31.920714 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 00:54:31.920718 | orchestrator | Thursday 09 April 2026 00:51:54 +0000 (0:00:00.866) 0:08:05.714 ******** 2026-04-09 00:54:31.920721 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920725 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.920729 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.920733 | orchestrator | 2026-04-09 00:54:31.920737 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-09 00:54:31.920741 | orchestrator | Thursday 09 April 2026 00:51:55 +0000 (0:00:00.473) 0:08:06.187 ******** 2026-04-09 00:54:31.920745 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.920749 | orchestrator | 2026-04-09 00:54:31.920753 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-09 00:54:31.920757 | orchestrator | Thursday 09 April 2026 00:51:55 +0000 (0:00:00.454) 0:08:06.642 ******** 2026-04-09 00:54:31.920761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:54:31.920765 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-04-09 00:54:31.920769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:54:31.920772 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920781 | orchestrator | 2026-04-09 00:54:31.920785 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-09 00:54:31.920789 | orchestrator | Thursday 09 April 2026 00:51:56 +0000 (0:00:00.392) 0:08:07.035 ******** 2026-04-09 00:54:31.920793 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920797 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.920800 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.920804 | orchestrator | 2026-04-09 00:54:31.920808 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-09 00:54:31.920812 | orchestrator | Thursday 09 April 2026 00:51:56 +0000 (0:00:00.328) 0:08:07.364 ******** 2026-04-09 00:54:31.920816 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920820 | orchestrator | 2026-04-09 00:54:31.920824 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-09 00:54:31.920828 | orchestrator | Thursday 09 April 2026 00:51:56 +0000 (0:00:00.194) 0:08:07.558 ******** 2026-04-09 00:54:31.920832 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920836 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.920840 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.920844 | orchestrator | 2026-04-09 00:54:31.920848 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-09 00:54:31.920852 | orchestrator | Thursday 09 April 2026 00:51:57 +0000 (0:00:00.549) 0:08:08.108 ******** 2026-04-09 00:54:31.920855 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920859 | orchestrator | 2026-04-09 00:54:31.920863 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-09 00:54:31.920867 | orchestrator | Thursday 09 April 2026 00:51:57 +0000 (0:00:00.209) 0:08:08.318 ******** 2026-04-09 00:54:31.920871 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920875 | orchestrator | 2026-04-09 00:54:31.920879 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-09 00:54:31.920883 | orchestrator | Thursday 09 April 2026 00:51:57 +0000 (0:00:00.222) 0:08:08.541 ******** 2026-04-09 00:54:31.920887 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920891 | orchestrator | 2026-04-09 00:54:31.920895 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-09 00:54:31.920899 | orchestrator | Thursday 09 April 2026 00:51:57 +0000 (0:00:00.125) 0:08:08.666 ******** 2026-04-09 00:54:31.920903 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920907 | orchestrator | 2026-04-09 00:54:31.920912 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-09 00:54:31.920918 | orchestrator | Thursday 09 April 2026 00:51:58 +0000 (0:00:00.225) 0:08:08.892 ******** 2026-04-09 00:54:31.920924 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.920931 | orchestrator | 2026-04-09 00:54:31.920938 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-09 00:54:31.920945 | orchestrator | Thursday 09 April 2026 00:51:58 +0000 (0:00:00.210) 0:08:09.102 ******** 2026-04-09 00:54:31.920951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:54:31.920969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:54:31.920975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:54:31.920981 | orchestrator | skipping: [testbed-node-3] 2026-04-09 
00:54:31.920986 | orchestrator | 2026-04-09 00:54:31.920992 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-09 00:54:31.920997 | orchestrator | Thursday 09 April 2026 00:51:58 +0000 (0:00:00.401) 0:08:09.503 ******** 2026-04-09 00:54:31.921006 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.921011 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.921017 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.921022 | orchestrator | 2026-04-09 00:54:31.921029 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-09 00:54:31.921033 | orchestrator | Thursday 09 April 2026 00:51:59 +0000 (0:00:00.298) 0:08:09.801 ******** 2026-04-09 00:54:31.921041 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.921045 | orchestrator | 2026-04-09 00:54:31.921049 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-09 00:54:31.921053 | orchestrator | Thursday 09 April 2026 00:51:59 +0000 (0:00:00.782) 0:08:10.584 ******** 2026-04-09 00:54:31.921057 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.921061 | orchestrator | 2026-04-09 00:54:31.921065 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-04-09 00:54:31.921069 | orchestrator | 2026-04-09 00:54:31.921075 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-09 00:54:31.921079 | orchestrator | Thursday 09 April 2026 00:52:00 +0000 (0:00:00.622) 0:08:11.206 ******** 2026-04-09 00:54:31.921084 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.921089 | orchestrator | 2026-04-09 00:54:31.921092 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-04-09 00:54:31.921096 | orchestrator | Thursday 09 April 2026 00:52:01 +0000 (0:00:00.992) 0:08:12.199 ******** 2026-04-09 00:54:31.921100 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-09 00:54:31.921104 | orchestrator | 2026-04-09 00:54:31.921108 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-09 00:54:31.921112 | orchestrator | Thursday 09 April 2026 00:52:02 +0000 (0:00:00.999) 0:08:13.199 ******** 2026-04-09 00:54:31.921116 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.921120 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.921124 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.921128 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.921132 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.921136 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.921140 | orchestrator | 2026-04-09 00:54:31.921143 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-09 00:54:31.921147 | orchestrator | Thursday 09 April 2026 00:52:03 +0000 (0:00:00.994) 0:08:14.194 ******** 2026-04-09 00:54:31.921152 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.921158 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.921164 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.921169 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.921179 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.921189 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.921202 | orchestrator | 2026-04-09 00:54:31.921208 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-09 00:54:31.921213 | orchestrator | Thursday 09 
April 2026 00:52:04 +0000 (0:00:00.958) 0:08:15.153 ******** 2026-04-09 00:54:31.921219 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.921225 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.921230 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.921236 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.921242 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.921248 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.921254 | orchestrator | 2026-04-09 00:54:31.921268 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-09 00:54:31.921275 | orchestrator | Thursday 09 April 2026 00:52:05 +0000 (0:00:00.639) 0:08:15.792 ******** 2026-04-09 00:54:31.921282 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.921288 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.921293 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.921300 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.921306 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.921312 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.921318 | orchestrator | 2026-04-09 00:54:31.921324 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-09 00:54:31.921335 | orchestrator | Thursday 09 April 2026 00:52:05 +0000 (0:00:00.902) 0:08:16.695 ******** 2026-04-09 00:54:31.921341 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.921347 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.921353 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.921359 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.921365 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.921372 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.921378 | orchestrator | 2026-04-09 00:54:31.921384 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-04-09 00:54:31.921391 | orchestrator | Thursday 09 April 2026 00:52:06 +0000 (0:00:00.982) 0:08:17.677 ******** 2026-04-09 00:54:31.921397 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.921401 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.921404 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.921408 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.921412 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.921416 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.921420 | orchestrator | 2026-04-09 00:54:31.921424 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:54:31.921428 | orchestrator | Thursday 09 April 2026 00:52:07 +0000 (0:00:00.947) 0:08:18.625 ******** 2026-04-09 00:54:31.921432 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.921436 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.921440 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.921444 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.921448 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.921452 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.921456 | orchestrator | 2026-04-09 00:54:31.921460 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:54:31.921464 | orchestrator | Thursday 09 April 2026 00:52:08 +0000 (0:00:00.536) 0:08:19.161 ******** 2026-04-09 00:54:31.921473 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.921477 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.921481 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.921484 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.921488 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.921492 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.921498 | orchestrator 
| 2026-04-09 00:54:31.921504 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:54:31.921510 | orchestrator | Thursday 09 April 2026 00:52:09 +0000 (0:00:01.150) 0:08:20.312 ******** 2026-04-09 00:54:31.921516 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.921522 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.921528 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.921534 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.921541 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:54:31.921547 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.921553 | orchestrator | 2026-04-09 00:54:31.921559 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:54:31.921570 | orchestrator | Thursday 09 April 2026 00:52:10 +0000 (0:00:00.935) 0:08:21.247 ******** 2026-04-09 00:54:31.921578 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.921582 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.921586 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.921590 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.921594 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.921598 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.921602 | orchestrator | 2026-04-09 00:54:31.921606 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:54:31.921610 | orchestrator | Thursday 09 April 2026 00:52:11 +0000 (0:00:00.612) 0:08:21.860 ******** 2026-04-09 00:54:31.921614 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.921618 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.921625 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.921629 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:54:31.921633 | orchestrator | ok: [testbed-node-1] 2026-04-09 
00:54:31.921637 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:54:31.921641 | orchestrator | 2026-04-09 00:54:31.921645 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:54:31.921649 | orchestrator | Thursday 09 April 2026 00:52:11 +0000 (0:00:00.509) 0:08:22.370 ******** 2026-04-09 00:54:31.921653 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.921657 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.921661 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.921665 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.921669 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.921672 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.921676 | orchestrator | 2026-04-09 00:54:31.921680 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 00:54:31.921684 | orchestrator | Thursday 09 April 2026 00:52:12 +0000 (0:00:00.667) 0:08:23.038 ******** 2026-04-09 00:54:31.921688 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.921692 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.921696 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.921700 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:54:31.921704 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:54:31.921708 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:54:31.921712 | orchestrator | 2026-04-09 00:54:31.921716 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:54:31.921719 | orchestrator | Thursday 09 April 2026 00:52:12 +0000 (0:00:00.510) 0:08:23.548 ******** 2026-04-09 00:54:31.921723 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.921727 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.921731 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.921735 | orchestrator | skipping: [testbed-node-0] 
2026-04-09 00:54:31.921739 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.921743 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.921747 | orchestrator |
2026-04-09 00:54:31.921751 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 00:54:31.921755 | orchestrator | Thursday 09 April 2026 00:52:13 +0000 (0:00:00.653) 0:08:24.202 ********
2026-04-09 00:54:31.921759 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.921762 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.921766 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.921770 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.921774 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.921778 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.921782 | orchestrator |
2026-04-09 00:54:31.921786 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 00:54:31.921790 | orchestrator | Thursday 09 April 2026 00:52:13 +0000 (0:00:00.502) 0:08:24.704 ********
2026-04-09 00:54:31.921794 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.921798 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.921802 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.921805 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:54:31.921809 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:54:31.921813 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:54:31.921817 | orchestrator |
2026-04-09 00:54:31.921821 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 00:54:31.921825 | orchestrator | Thursday 09 April 2026 00:52:14 +0000 (0:00:00.659) 0:08:25.364 ********
2026-04-09 00:54:31.921829 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.921833 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.921837 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.921841 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.921845 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.921848 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.921856 | orchestrator |
2026-04-09 00:54:31.921860 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 00:54:31.921864 | orchestrator | Thursday 09 April 2026 00:52:15 +0000 (0:00:00.517) 0:08:25.881 ********
2026-04-09 00:54:31.921868 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.921872 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.921876 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.921880 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.921883 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.921887 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.921891 | orchestrator |
2026-04-09 00:54:31.921895 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 00:54:31.921899 | orchestrator | Thursday 09 April 2026 00:52:15 +0000 (0:00:00.847) 0:08:26.729 ********
2026-04-09 00:54:31.921906 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.921910 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.921914 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.921918 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.921922 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.921926 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.921930 | orchestrator |
2026-04-09 00:54:31.921934 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-09 00:54:31.921938 | orchestrator | Thursday 09 April 2026 00:52:17 +0000 (0:00:01.242) 0:08:27.971 ********
2026-04-09 00:54:31.921942 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:54:31.921946 | orchestrator |
2026-04-09 00:54:31.921950 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-09 00:54:31.921954 | orchestrator | Thursday 09 April 2026 00:52:21 +0000 (0:00:03.960) 0:08:31.931 ********
2026-04-09 00:54:31.921972 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:54:31.921980 | orchestrator |
2026-04-09 00:54:31.921987 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-09 00:54:31.921991 | orchestrator | Thursday 09 April 2026 00:52:23 +0000 (0:00:01.935) 0:08:33.867 ********
2026-04-09 00:54:31.921995 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.921999 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.922003 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.922007 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.922011 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.922038 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.922043 | orchestrator |
2026-04-09 00:54:31.922047 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-09 00:54:31.922051 | orchestrator | Thursday 09 April 2026 00:52:24 +0000 (0:00:01.391) 0:08:35.258 ********
2026-04-09 00:54:31.922055 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.922058 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.922062 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.922066 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.922070 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.922074 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.922078 | orchestrator |
2026-04-09 00:54:31.922082 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-09 00:54:31.922086 | orchestrator | Thursday 09 April 2026 00:52:25 +0000 (0:00:01.180) 0:08:36.439 ********
2026-04-09 00:54:31.922090 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.922095 | orchestrator |
2026-04-09 00:54:31.922099 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-09 00:54:31.922103 | orchestrator | Thursday 09 April 2026 00:52:26 +0000 (0:00:01.217) 0:08:37.657 ********
2026-04-09 00:54:31.922107 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.922111 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.922115 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.922122 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.922126 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.922130 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.922134 | orchestrator |
2026-04-09 00:54:31.922138 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-09 00:54:31.922142 | orchestrator | Thursday 09 April 2026 00:52:28 +0000 (0:00:01.665) 0:08:39.322 ********
2026-04-09 00:54:31.922146 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.922150 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.922154 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.922157 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.922161 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.922165 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.922169 | orchestrator |
2026-04-09 00:54:31.922173 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-09 00:54:31.922177 | orchestrator | Thursday 09 April 2026 00:52:31 +0000 (0:00:03.201) 0:08:42.524 ********
2026-04-09 00:54:31.922181 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:54:31.922185 | orchestrator |
2026-04-09 00:54:31.922189 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-09 00:54:31.922193 | orchestrator | Thursday 09 April 2026 00:52:32 +0000 (0:00:01.020) 0:08:43.545 ********
2026-04-09 00:54:31.922197 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.922201 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.922205 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.922209 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.922213 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.922216 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.922220 | orchestrator |
2026-04-09 00:54:31.922224 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-09 00:54:31.922228 | orchestrator | Thursday 09 April 2026 00:52:33 +0000 (0:00:00.518) 0:08:44.064 ********
2026-04-09 00:54:31.922232 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.922236 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.922240 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.922244 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:54:31.922248 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:54:31.922252 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:54:31.922256 | orchestrator |
2026-04-09 00:54:31.922260 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-09 00:54:31.922264 | orchestrator | Thursday 09 April 2026 00:52:35 +0000 (0:00:02.380) 0:08:46.444 ********
2026-04-09 00:54:31.922268 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.922272 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.922276 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.922280 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:54:31.922284 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:54:31.922287 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:54:31.922291 | orchestrator |
2026-04-09 00:54:31.922295 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-09 00:54:31.922299 | orchestrator |
2026-04-09 00:54:31.922303 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 00:54:31.922310 | orchestrator | Thursday 09 April 2026 00:52:36 +0000 (0:00:00.731) 0:08:47.175 ********
2026-04-09 00:54:31.922314 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:54:31.922318 | orchestrator |
2026-04-09 00:54:31.922322 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 00:54:31.922326 | orchestrator | Thursday 09 April 2026 00:52:37 +0000 (0:00:00.725) 0:08:47.901 ********
2026-04-09 00:54:31.922330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:54:31.922337 | orchestrator |
2026-04-09 00:54:31.922341 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 00:54:31.922350 | orchestrator | Thursday 09 April 2026 00:52:37 +0000 (0:00:00.464) 0:08:48.365 ********
2026-04-09 00:54:31.922357 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.922363 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.922369 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.922375 | orchestrator |
2026-04-09 00:54:31.922382 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 00:54:31.922388 | orchestrator | Thursday 09 April 2026 00:52:38 +0000 (0:00:00.460) 0:08:48.826 ********
2026-04-09 00:54:31.922395 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.922402 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.922408 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.922414 | orchestrator |
2026-04-09 00:54:31.922420 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 00:54:31.922424 | orchestrator | Thursday 09 April 2026 00:52:38 +0000 (0:00:00.677) 0:08:49.504 ********
2026-04-09 00:54:31.922428 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.922431 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.922435 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.922439 | orchestrator |
2026-04-09 00:54:31.922443 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 00:54:31.922447 | orchestrator | Thursday 09 April 2026 00:52:39 +0000 (0:00:00.620) 0:08:50.125 ********
2026-04-09 00:54:31.922451 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.922455 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.922459 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.922463 | orchestrator |
2026-04-09 00:54:31.922467 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 00:54:31.922471 | orchestrator | Thursday 09 April 2026 00:52:40 +0000 (0:00:00.632) 0:08:50.757 ********
2026-04-09 00:54:31.922475 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.922479 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.922483 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.922487 | orchestrator |
2026-04-09 00:54:31.922490 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 00:54:31.922494 | orchestrator | Thursday 09 April 2026 00:52:40 +0000 (0:00:00.454) 0:08:51.212 ********
2026-04-09 00:54:31.922498 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.922502 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.922508 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.922514 | orchestrator |
2026-04-09 00:54:31.922524 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-09 00:54:31.922534 | orchestrator | Thursday 09 April 2026 00:52:40 +0000 (0:00:00.343) 0:08:51.555 ********
2026-04-09 00:54:31.922539 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.922544 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.922550 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.922556 | orchestrator |
2026-04-09 00:54:31.922562 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-09 00:54:31.922568 | orchestrator | Thursday 09 April 2026 00:52:41 +0000 (0:00:00.292) 0:08:51.848 ********
2026-04-09 00:54:31.922574 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.922580 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.922585 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.922591 | orchestrator |
2026-04-09 00:54:31.922596 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-09 00:54:31.922601 | orchestrator | Thursday 09 April 2026 00:52:41 +0000 (0:00:00.676) 0:08:52.525 ********
2026-04-09 00:54:31.922607 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.922612 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.922619 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.922624 | orchestrator |
2026-04-09 00:54:31.922631 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-09 00:54:31.922642 | orchestrator | Thursday 09 April 2026 00:52:42 +0000 (0:00:00.818) 0:08:53.344 ********
2026-04-09 00:54:31.922648 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.922654 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.922660 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.922666 | orchestrator |
2026-04-09 00:54:31.922672 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-09 00:54:31.922678 | orchestrator | Thursday 09 April 2026 00:52:42 +0000 (0:00:00.263) 0:08:53.608 ********
2026-04-09 00:54:31.922685 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.922691 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.922698 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.922703 | orchestrator |
2026-04-09 00:54:31.922707 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-09 00:54:31.922711 | orchestrator | Thursday 09 April 2026 00:52:43 +0000 (0:00:00.273) 0:08:53.881 ********
2026-04-09 00:54:31.922715 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.922718 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.922722 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.922726 | orchestrator |
2026-04-09 00:54:31.922730 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-09 00:54:31.922734 | orchestrator | Thursday 09 April 2026 00:52:43 +0000 (0:00:00.292) 0:08:54.174 ********
2026-04-09 00:54:31.922738 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.922742 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.922746 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.922750 | orchestrator |
2026-04-09 00:54:31.922754 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-09 00:54:31.922763 | orchestrator | Thursday 09 April 2026 00:52:43 +0000 (0:00:00.447) 0:08:54.622 ********
2026-04-09 00:54:31.922767 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.922771 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.922775 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.922779 | orchestrator |
2026-04-09 00:54:31.922783 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-09 00:54:31.922787 | orchestrator | Thursday 09 April 2026 00:52:44 +0000 (0:00:00.299) 0:08:54.921 ********
2026-04-09 00:54:31.922791 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.922795 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.922799 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.922802 | orchestrator |
2026-04-09 00:54:31.922806 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-09 00:54:31.922810 | orchestrator | Thursday 09 April 2026 00:52:44 +0000 (0:00:00.269) 0:08:55.190 ********
2026-04-09 00:54:31.922814 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.922823 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.922827 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.922831 | orchestrator |
2026-04-09 00:54:31.922835 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-09 00:54:31.922839 | orchestrator | Thursday 09 April 2026 00:52:44 +0000 (0:00:00.280) 0:08:55.471 ********
2026-04-09 00:54:31.922843 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.922847 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.922851 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.922855 | orchestrator |
2026-04-09 00:54:31.922859 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-09 00:54:31.922863 | orchestrator | Thursday 09 April 2026 00:52:44 +0000 (0:00:00.266) 0:08:55.738 ********
2026-04-09 00:54:31.922867 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.922871 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.922875 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.922879 | orchestrator |
2026-04-09 00:54:31.922883 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-09 00:54:31.922887 | orchestrator | Thursday 09 April 2026 00:52:45 +0000 (0:00:00.479) 0:08:56.218 ********
2026-04-09 00:54:31.922895 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.922899 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.922902 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.922906 | orchestrator |
2026-04-09 00:54:31.922910 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-09 00:54:31.922914 | orchestrator | Thursday 09 April 2026 00:52:45 +0000 (0:00:00.466) 0:08:56.684 ********
2026-04-09 00:54:31.922918 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.922922 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.922926 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-09 00:54:31.922930 | orchestrator |
2026-04-09 00:54:31.922934 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-09 00:54:31.922938 | orchestrator | Thursday 09 April 2026 00:52:46 +0000 (0:00:00.500) 0:08:57.184 ********
2026-04-09 00:54:31.922942 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:54:31.922946 | orchestrator |
2026-04-09 00:54:31.922950 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-09 00:54:31.922954 | orchestrator | Thursday 09 April 2026 00:52:48 +0000 (0:00:02.088) 0:08:59.273 ********
2026-04-09 00:54:31.922981 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-09 00:54:31.922987 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.922991 | orchestrator |
2026-04-09 00:54:31.922995 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-09 00:54:31.922999 | orchestrator | Thursday 09 April 2026 00:52:48 +0000 (0:00:00.210) 0:08:59.483 ********
2026-04-09 00:54:31.923004 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-09 00:54:31.923012 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-09 00:54:31.923017 | orchestrator |
2026-04-09 00:54:31.923021 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-09 00:54:31.923025 | orchestrator | Thursday 09 April 2026 00:52:57 +0000 (0:00:08.803) 0:09:08.287 ********
2026-04-09 00:54:31.923029 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-09 00:54:31.923033 | orchestrator |
2026-04-09 00:54:31.923037 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-09 00:54:31.923041 | orchestrator | Thursday 09 April 2026 00:53:01 +0000 (0:00:03.516) 0:09:11.804 ********
2026-04-09 00:54:31.923045 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:54:31.923050 | orchestrator |
2026-04-09 00:54:31.923054 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-09 00:54:31.923058 | orchestrator | Thursday 09 April 2026 00:53:01 +0000 (0:00:00.500) 0:09:12.305 ********
2026-04-09 00:54:31.923062 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-09 00:54:31.923066 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-09 00:54:31.923072 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-09 00:54:31.923077 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-09 00:54:31.923081 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-09 00:54:31.923088 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-09 00:54:31.923092 | orchestrator |
2026-04-09 00:54:31.923096 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-09 00:54:31.923100 | orchestrator | Thursday 09 April 2026 00:53:02 +0000 (0:00:01.226) 0:09:13.532 ********
2026-04-09 00:54:31.923104 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-09 00:54:31.923108 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-09 00:54:31.923114 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-09 00:54:31.923118 | orchestrator |
2026-04-09 00:54:31.923122 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-09 00:54:31.923126 | orchestrator | Thursday 09 April 2026 00:53:05 +0000 (0:00:02.211) 0:09:15.743 ********
2026-04-09 00:54:31.923129 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-09 00:54:31.923133 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-09 00:54:31.923137 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.923141 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-09 00:54:31.923145 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-09 00:54:31.923149 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.923153 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-09 00:54:31.923157 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-09 00:54:31.923161 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.923165 | orchestrator |
2026-04-09 00:54:31.923169 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-09 00:54:31.923173 | orchestrator | Thursday 09 April 2026 00:53:06 +0000 (0:00:01.178) 0:09:16.921 ********
2026-04-09 00:54:31.923177 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.923180 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.923184 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.923188 | orchestrator |
2026-04-09 00:54:31.923192 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-09 00:54:31.923196 | orchestrator | Thursday 09 April 2026 00:53:08 +0000 (0:00:02.595) 0:09:19.516 ********
2026-04-09 00:54:31.923200 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.923204 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.923208 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.923212 | orchestrator |
2026-04-09 00:54:31.923216 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-09 00:54:31.923220 | orchestrator | Thursday 09 April 2026 00:53:09 +0000 (0:00:00.521) 0:09:20.038 ********
2026-04-09 00:54:31.923224 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:54:31.923228 | orchestrator |
2026-04-09 00:54:31.923232 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-09 00:54:31.923236 | orchestrator | Thursday 09 April 2026 00:53:09 +0000 (0:00:00.545) 0:09:20.584 ********
2026-04-09 00:54:31.923239 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:54:31.923243 | orchestrator |
2026-04-09 00:54:31.923247 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-09 00:54:31.923251 | orchestrator | Thursday 09 April 2026 00:53:10 +0000 (0:00:00.688) 0:09:21.273 ********
2026-04-09 00:54:31.923255 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.923259 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.923263 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.923267 | orchestrator |
2026-04-09 00:54:31.923271 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-09 00:54:31.923275 | orchestrator | Thursday 09 April 2026 00:53:11 +0000 (0:00:01.255) 0:09:22.529 ********
2026-04-09 00:54:31.923279 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.923283 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.923290 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.923294 | orchestrator |
2026-04-09 00:54:31.923298 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-09 00:54:31.923302 | orchestrator | Thursday 09 April 2026 00:53:12 +0000 (0:00:01.136) 0:09:23.666 ********
2026-04-09 00:54:31.923305 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.923309 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.923313 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.923317 | orchestrator |
2026-04-09 00:54:31.923321 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-09 00:54:31.923325 | orchestrator | Thursday 09 April 2026 00:53:14 +0000 (0:00:01.662) 0:09:25.329 ********
2026-04-09 00:54:31.923329 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.923333 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.923337 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.923341 | orchestrator |
2026-04-09 00:54:31.923345 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-09 00:54:31.923349 | orchestrator | Thursday 09 April 2026 00:53:16 +0000 (0:00:02.083) 0:09:27.413 ********
2026-04-09 00:54:31.923353 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.923357 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.923361 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.923365 | orchestrator |
2026-04-09 00:54:31.923369 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-09 00:54:31.923373 | orchestrator | Thursday 09 April 2026 00:53:17 +0000 (0:00:01.222) 0:09:28.635 ********
2026-04-09 00:54:31.923376 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.923380 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.923384 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.923388 | orchestrator |
2026-04-09 00:54:31.923392 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-09 00:54:31.923399 | orchestrator | Thursday 09 April 2026 00:53:18 +0000 (0:00:00.923) 0:09:29.559 ********
2026-04-09 00:54:31.923403 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:54:31.923407 | orchestrator |
2026-04-09 00:54:31.923411 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-09 00:54:31.923415 | orchestrator | Thursday 09 April 2026 00:53:19 +0000 (0:00:00.497) 0:09:30.057 ********
2026-04-09 00:54:31.923419 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.923423 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.923427 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.923431 | orchestrator |
2026-04-09 00:54:31.923435 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-09 00:54:31.923439 | orchestrator | Thursday 09 April 2026 00:53:19 +0000 (0:00:00.302) 0:09:30.359 ********
2026-04-09 00:54:31.923443 | orchestrator | changed: [testbed-node-3]
2026-04-09 00:54:31.923449 | orchestrator | changed: [testbed-node-4]
2026-04-09 00:54:31.923453 | orchestrator | changed: [testbed-node-5]
2026-04-09 00:54:31.923457 | orchestrator |
2026-04-09 00:54:31.923461 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-09 00:54:31.923464 | orchestrator | Thursday 09 April 2026 00:53:21 +0000 (0:00:01.467) 0:09:31.827 ********
2026-04-09 00:54:31.923468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-09 00:54:31.923472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-09 00:54:31.923476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-09 00:54:31.923480 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.923484 | orchestrator |
2026-04-09 00:54:31.923488 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-09 00:54:31.923492 | orchestrator | Thursday 09 April 2026 00:53:21 +0000 (0:00:00.615) 0:09:32.442 ********
2026-04-09 00:54:31.923496 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.923500 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.923507 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.923511 | orchestrator |
2026-04-09 00:54:31.923515 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-09 00:54:31.923519 | orchestrator |
2026-04-09 00:54:31.923523 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-09 00:54:31.923526 | orchestrator | Thursday 09 April 2026 00:53:22 +0000 (0:00:00.522) 0:09:32.965 ********
2026-04-09 00:54:31.923530 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:54:31.923535 | orchestrator |
2026-04-09 00:54:31.923539 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-09 00:54:31.923542 | orchestrator | Thursday 09 April 2026 00:53:22 +0000 (0:00:00.698) 0:09:33.663 ********
2026-04-09 00:54:31.923546 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:54:31.923551 | orchestrator |
2026-04-09 00:54:31.923555 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-09 00:54:31.923558 | orchestrator | Thursday 09 April 2026 00:53:23 +0000 (0:00:00.535) 0:09:34.198 ********
2026-04-09 00:54:31.923562 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.923566 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.923570 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.923574 | orchestrator |
2026-04-09 00:54:31.923578 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-09 00:54:31.923582 | orchestrator | Thursday 09 April 2026 00:53:23 +0000 (0:00:00.281) 0:09:34.480 ********
2026-04-09 00:54:31.923586 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.923593 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.923599 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.923605 | orchestrator |
2026-04-09 00:54:31.923613 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-09 00:54:31.923623 | orchestrator | Thursday 09 April 2026 00:53:24 +0000 (0:00:01.001) 0:09:35.481 ********
2026-04-09 00:54:31.923629 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.923636 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.923642 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.923648 | orchestrator |
2026-04-09 00:54:31.923654 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-09 00:54:31.923660 | orchestrator | Thursday 09 April 2026 00:53:25 +0000 (0:00:00.706) 0:09:36.187 ********
2026-04-09 00:54:31.923665 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:54:31.923671 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:54:31.923678 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:54:31.923684 | orchestrator |
2026-04-09 00:54:31.923690 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-09 00:54:31.923694 | orchestrator | Thursday 09 April 2026 00:53:26 +0000 (0:00:00.713) 0:09:36.901 ********
2026-04-09 00:54:31.923698 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.923702 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.923705 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:54:31.923709 | orchestrator |
2026-04-09 00:54:31.923713 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-09 00:54:31.923717 | orchestrator | Thursday 09 April 2026 00:53:26 +0000 (0:00:00.289) 0:09:37.191 ********
2026-04-09 00:54:31.923721 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:54:31.923725 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:54:31.923729 | orchestrator | skipping: 
[testbed-node-5] 2026-04-09 00:54:31.923733 | orchestrator | 2026-04-09 00:54:31.923737 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-09 00:54:31.923741 | orchestrator | Thursday 09 April 2026 00:53:26 +0000 (0:00:00.541) 0:09:37.733 ******** 2026-04-09 00:54:31.923744 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.923748 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.923752 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.923760 | orchestrator | 2026-04-09 00:54:31.923764 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-09 00:54:31.923769 | orchestrator | Thursday 09 April 2026 00:53:27 +0000 (0:00:00.302) 0:09:38.035 ******** 2026-04-09 00:54:31.923775 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.923785 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.923792 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.923798 | orchestrator | 2026-04-09 00:54:31.923804 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-09 00:54:31.923810 | orchestrator | Thursday 09 April 2026 00:53:28 +0000 (0:00:00.746) 0:09:38.781 ******** 2026-04-09 00:54:31.923816 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.923822 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.923829 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.923835 | orchestrator | 2026-04-09 00:54:31.923842 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-09 00:54:31.923848 | orchestrator | Thursday 09 April 2026 00:53:28 +0000 (0:00:00.726) 0:09:39.508 ******** 2026-04-09 00:54:31.923855 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.923861 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.923867 | orchestrator | skipping: [testbed-node-5] 2026-04-09 
00:54:31.923873 | orchestrator | 2026-04-09 00:54:31.923883 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-09 00:54:31.923890 | orchestrator | Thursday 09 April 2026 00:53:29 +0000 (0:00:00.536) 0:09:40.044 ******** 2026-04-09 00:54:31.923895 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.923901 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.923907 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.923913 | orchestrator | 2026-04-09 00:54:31.923919 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-09 00:54:31.923926 | orchestrator | Thursday 09 April 2026 00:53:29 +0000 (0:00:00.335) 0:09:40.380 ******** 2026-04-09 00:54:31.923932 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.923939 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.923946 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.923952 | orchestrator | 2026-04-09 00:54:31.923969 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-09 00:54:31.923976 | orchestrator | Thursday 09 April 2026 00:53:29 +0000 (0:00:00.336) 0:09:40.716 ******** 2026-04-09 00:54:31.923983 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.923989 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.923995 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.924001 | orchestrator | 2026-04-09 00:54:31.924007 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-09 00:54:31.924014 | orchestrator | Thursday 09 April 2026 00:53:30 +0000 (0:00:00.326) 0:09:41.043 ******** 2026-04-09 00:54:31.924021 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.924027 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.924033 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.924040 | orchestrator | 2026-04-09 
00:54:31.924046 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-09 00:54:31.924052 | orchestrator | Thursday 09 April 2026 00:53:30 +0000 (0:00:00.571) 0:09:41.614 ******** 2026-04-09 00:54:31.924056 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.924059 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.924063 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.924067 | orchestrator | 2026-04-09 00:54:31.924071 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-09 00:54:31.924075 | orchestrator | Thursday 09 April 2026 00:53:31 +0000 (0:00:00.310) 0:09:41.925 ******** 2026-04-09 00:54:31.924079 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.924085 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.924091 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.924100 | orchestrator | 2026-04-09 00:54:31.924107 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-09 00:54:31.924119 | orchestrator | Thursday 09 April 2026 00:53:31 +0000 (0:00:00.275) 0:09:42.201 ******** 2026-04-09 00:54:31.924125 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.924132 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.924138 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.924145 | orchestrator | 2026-04-09 00:54:31.924150 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-09 00:54:31.924154 | orchestrator | Thursday 09 April 2026 00:53:31 +0000 (0:00:00.325) 0:09:42.526 ******** 2026-04-09 00:54:31.924158 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.924162 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.924166 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.924170 | orchestrator | 2026-04-09 00:54:31.924174 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-09 00:54:31.924178 | orchestrator | Thursday 09 April 2026 00:53:32 +0000 (0:00:00.586) 0:09:43.113 ******** 2026-04-09 00:54:31.924182 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.924186 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.924190 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.924194 | orchestrator | 2026-04-09 00:54:31.924197 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-09 00:54:31.924201 | orchestrator | Thursday 09 April 2026 00:53:32 +0000 (0:00:00.555) 0:09:43.668 ******** 2026-04-09 00:54:31.924205 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.924209 | orchestrator | 2026-04-09 00:54:31.924213 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-09 00:54:31.924217 | orchestrator | Thursday 09 April 2026 00:53:33 +0000 (0:00:00.748) 0:09:44.416 ******** 2026-04-09 00:54:31.924221 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:54:31.924227 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 00:54:31.924233 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:54:31.924240 | orchestrator | 2026-04-09 00:54:31.924246 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-09 00:54:31.924252 | orchestrator | Thursday 09 April 2026 00:53:35 +0000 (0:00:02.297) 0:09:46.714 ******** 2026-04-09 00:54:31.924258 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 00:54:31.924264 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-09 00:54:31.924270 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.924276 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-04-09 00:54:31.924282 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-09 00:54:31.924288 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.924300 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 00:54:31.924306 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-09 00:54:31.924313 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.924319 | orchestrator | 2026-04-09 00:54:31.924325 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-09 00:54:31.924332 | orchestrator | Thursday 09 April 2026 00:53:37 +0000 (0:00:01.223) 0:09:47.937 ******** 2026-04-09 00:54:31.924338 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.924344 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.924350 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.924356 | orchestrator | 2026-04-09 00:54:31.924362 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-09 00:54:31.924367 | orchestrator | Thursday 09 April 2026 00:53:37 +0000 (0:00:00.324) 0:09:48.261 ******** 2026-04-09 00:54:31.924378 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.924384 | orchestrator | 2026-04-09 00:54:31.924390 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-09 00:54:31.924402 | orchestrator | Thursday 09 April 2026 00:53:38 +0000 (0:00:00.808) 0:09:49.070 ******** 2026-04-09 00:54:31.924408 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 00:54:31.924416 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 00:54:31.924422 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 00:54:31.924429 | orchestrator | 2026-04-09 00:54:31.924434 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-09 00:54:31.924441 | orchestrator | Thursday 09 April 2026 00:53:39 +0000 (0:00:00.833) 0:09:49.904 ******** 2026-04-09 00:54:31.924447 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:54:31.924455 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 00:54:31.924460 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:54:31.924466 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 00:54:31.924475 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:54:31.924483 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-09 00:54:31.924489 | orchestrator | 2026-04-09 00:54:31.924495 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-09 00:54:31.924501 | orchestrator | Thursday 09 April 2026 00:53:43 +0000 (0:00:04.541) 0:09:54.445 ******** 2026-04-09 00:54:31.924507 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:54:31.924513 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:54:31.924520 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:54:31.924526 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:54:31.924532 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:54:31.924539 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:54:31.924545 | orchestrator | 2026-04-09 00:54:31.924551 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-09 00:54:31.924557 | orchestrator | Thursday 09 April 2026 00:53:46 +0000 (0:00:02.321) 0:09:56.766 ******** 2026-04-09 00:54:31.924562 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-09 00:54:31.924569 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.924575 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-09 00:54:31.924581 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.924587 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-09 00:54:31.924594 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.924600 | orchestrator | 2026-04-09 00:54:31.924607 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-09 00:54:31.924613 | orchestrator | Thursday 09 April 2026 00:53:47 +0000 (0:00:01.492) 0:09:58.259 ******** 2026-04-09 00:54:31.924620 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-09 00:54:31.924626 | orchestrator | 2026-04-09 00:54:31.924633 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-09 00:54:31.924639 | orchestrator | Thursday 09 April 2026 00:53:47 +0000 (0:00:00.210) 0:09:58.469 ******** 2026-04-09 00:54:31.924646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-09 00:54:31.924659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:54:31.924668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:54:31.924680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:54:31.924687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:54:31.924694 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.924700 | orchestrator | 2026-04-09 00:54:31.924706 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-09 00:54:31.924713 | orchestrator | Thursday 09 April 2026 00:53:48 +0000 (0:00:00.610) 0:09:59.079 ******** 2026-04-09 00:54:31.924718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:54:31.924726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:54:31.924730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:54:31.924734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:54:31.924738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-09 00:54:31.924742 | orchestrator | skipping: [testbed-node-3] 2026-04-09 
00:54:31.924746 | orchestrator | 2026-04-09 00:54:31.924750 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-09 00:54:31.924754 | orchestrator | Thursday 09 April 2026 00:53:48 +0000 (0:00:00.610) 0:09:59.689 ******** 2026-04-09 00:54:31.924758 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:54:31.924762 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:54:31.924766 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:54:31.924770 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:54:31.924774 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-09 00:54:31.924778 | orchestrator | 2026-04-09 00:54:31.924782 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-09 00:54:31.924786 | orchestrator | Thursday 09 April 2026 00:54:19 +0000 (0:00:30.770) 0:10:30.460 ******** 2026-04-09 00:54:31.924790 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.924794 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.924798 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.924802 | orchestrator | 2026-04-09 00:54:31.924806 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-09 00:54:31.924810 | orchestrator | 
Thursday 09 April 2026 00:54:19 +0000 (0:00:00.277) 0:10:30.737 ******** 2026-04-09 00:54:31.924814 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.924818 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.924822 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.924831 | orchestrator | 2026-04-09 00:54:31.924835 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-09 00:54:31.924839 | orchestrator | Thursday 09 April 2026 00:54:20 +0000 (0:00:00.433) 0:10:31.170 ******** 2026-04-09 00:54:31.924843 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.924847 | orchestrator | 2026-04-09 00:54:31.924851 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-09 00:54:31.924855 | orchestrator | Thursday 09 April 2026 00:54:20 +0000 (0:00:00.486) 0:10:31.656 ******** 2026-04-09 00:54:31.924859 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.924863 | orchestrator | 2026-04-09 00:54:31.924867 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-09 00:54:31.924871 | orchestrator | Thursday 09 April 2026 00:54:21 +0000 (0:00:00.738) 0:10:32.395 ******** 2026-04-09 00:54:31.924875 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.924879 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.924883 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.924887 | orchestrator | 2026-04-09 00:54:31.924890 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-09 00:54:31.924894 | orchestrator | Thursday 09 April 2026 00:54:22 +0000 (0:00:01.158) 0:10:33.554 ******** 2026-04-09 00:54:31.924898 | orchestrator | changed: 
[testbed-node-3] 2026-04-09 00:54:31.924902 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.924906 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.924910 | orchestrator | 2026-04-09 00:54:31.924914 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-09 00:54:31.924918 | orchestrator | Thursday 09 April 2026 00:54:23 +0000 (0:00:01.028) 0:10:34.582 ******** 2026-04-09 00:54:31.924922 | orchestrator | changed: [testbed-node-3] 2026-04-09 00:54:31.924929 | orchestrator | changed: [testbed-node-5] 2026-04-09 00:54:31.924935 | orchestrator | changed: [testbed-node-4] 2026-04-09 00:54:31.924941 | orchestrator | 2026-04-09 00:54:31.924947 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-09 00:54:31.924953 | orchestrator | Thursday 09 April 2026 00:54:25 +0000 (0:00:01.617) 0:10:36.200 ******** 2026-04-09 00:54:31.924970 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-09 00:54:31.924977 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-09 00:54:31.924991 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-09 00:54:31.924998 | orchestrator | 2026-04-09 00:54:31.925004 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-09 00:54:31.925011 | orchestrator | Thursday 09 April 2026 00:54:27 +0000 (0:00:02.403) 0:10:38.603 ******** 2026-04-09 00:54:31.925018 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.925022 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.925026 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.925030 | orchestrator 
| 2026-04-09 00:54:31.925034 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-09 00:54:31.925038 | orchestrator | Thursday 09 April 2026 00:54:28 +0000 (0:00:00.340) 0:10:38.944 ******** 2026-04-09 00:54:31.925042 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:54:31.925046 | orchestrator | 2026-04-09 00:54:31.925050 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-09 00:54:31.925054 | orchestrator | Thursday 09 April 2026 00:54:28 +0000 (0:00:00.769) 0:10:39.713 ******** 2026-04-09 00:54:31.925059 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.925066 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.925081 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.925088 | orchestrator | 2026-04-09 00:54:31.925093 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-09 00:54:31.925099 | orchestrator | Thursday 09 April 2026 00:54:29 +0000 (0:00:00.314) 0:10:40.028 ******** 2026-04-09 00:54:31.925106 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:54:31.925111 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:54:31.925117 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:54:31.925124 | orchestrator | 2026-04-09 00:54:31.925130 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-09 00:54:31.925137 | orchestrator | Thursday 09 April 2026 00:54:29 +0000 (0:00:00.337) 0:10:40.365 ******** 2026-04-09 00:54:31.925143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:54:31.925150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:54:31.925157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:54:31.925161 | orchestrator 
| skipping: [testbed-node-3] 2026-04-09 00:54:31.925165 | orchestrator | 2026-04-09 00:54:31.925169 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-09 00:54:31.925173 | orchestrator | Thursday 09 April 2026 00:54:30 +0000 (0:00:00.917) 0:10:41.282 ******** 2026-04-09 00:54:31.925176 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:54:31.925180 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:54:31.925184 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:54:31.925188 | orchestrator | 2026-04-09 00:54:31.925192 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:54:31.925196 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-04-09 00:54:31.925201 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-09 00:54:31.925205 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-09 00:54:31.925209 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-04-09 00:54:31.925213 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-09 00:54:31.925217 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-09 00:54:31.925222 | orchestrator | 2026-04-09 00:54:31.925229 | orchestrator | 2026-04-09 00:54:31.925235 | orchestrator | 2026-04-09 00:54:31.925241 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:54:31.925247 | orchestrator | Thursday 09 April 2026 00:54:31 +0000 (0:00:00.501) 0:10:41.784 ******** 2026-04-09 00:54:31.925254 | orchestrator | =============================================================================== 
2026-04-09 00:54:31.925260 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 56.34s 2026-04-09 00:54:31.925266 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.45s 2026-04-09 00:54:31.925273 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.24s 2026-04-09 00:54:31.925279 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.77s 2026-04-09 00:54:31.925286 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.68s 2026-04-09 00:54:31.925297 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.04s 2026-04-09 00:54:31.925303 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.79s 2026-04-09 00:54:31.925310 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.82s 2026-04-09 00:54:31.925323 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.80s 2026-04-09 00:54:31.925327 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.08s 2026-04-09 00:54:31.925331 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 7.16s 2026-04-09 00:54:31.925335 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.37s 2026-04-09 00:54:31.925341 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.74s 2026-04-09 00:54:31.925351 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.54s 2026-04-09 00:54:31.925357 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.96s 2026-04-09 00:54:31.925363 | orchestrator | ceph-container-common : Enable ceph.target ------------------------------ 3.52s 2026-04-09 
00:54:31.925369 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.52s
2026-04-09 00:54:31.925376 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.50s
2026-04-09 00:54:31.925382 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.37s
2026-04-09 00:54:31.925389 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.20s
2026-04-09 00:54:31.925396 | orchestrator | 2026-04-09 00:54:31 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:54:34.959368 | orchestrator | 2026-04-09 00:54:34 | INFO  | Task 775fe1d8-5d5e-4d16-94c2-456fcb6f33d0 is in state STARTED
2026-04-09 00:54:34.959448 | orchestrator | 2026-04-09 00:54:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:54:34.959456 | orchestrator | 2026-04-09 00:54:34 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries (both tasks in state STARTED, then "Wait 1 second(s) until the next check") repeated roughly every 3 seconds from 00:54:38 through 00:56:33, elided ...]
2026-04-09 00:56:36.807694 | orchestrator | 2026-04-09 00:56:36 | INFO  | Task 775fe1d8-5d5e-4d16-94c2-456fcb6f33d0 is in state STARTED
2026-04-09 00:56:36.809185 | orchestrator | 2026-04-09 00:56:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:56:36.809211 | orchestrator | 2026-04-09 00:56:36 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:56:39.861074 | orchestrator | 2026-04-09 00:56:39 | INFO  | Task 775fe1d8-5d5e-4d16-94c2-456fcb6f33d0 is in state SUCCESS
2026-04-09 00:56:39.863008 | orchestrator |
2026-04-09 00:56:39.863054 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-09 00:56:39.863060 | orchestrator | 2.16.14
2026-04-09 00:56:39.863065 | orchestrator |
2026-04-09 00:56:39.863070 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-04-09 00:56:39.863075 | orchestrator |
2026-04-09 00:56:39.863079 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-09 00:56:39.863084 | orchestrator | Thursday 09 April 2026 00:54:35 +0000 (0:00:00.548) 0:00:00.548 ********
2026-04-09 00:56:39.863088 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-09 00:56:39.863093 | orchestrator |
2026-04-09 00:56:39.863098 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-09 00:56:39.863102 | orchestrator | Thursday 09 April 2026 00:54:36 +0000 (0:00:00.964) 0:00:01.141 ********
2026-04-09 00:56:39.863106 | orchestrator | ok: [testbed-node-3]
2026-04-09 00:56:39.863110 | orchestrator | ok: [testbed-node-4]
2026-04-09 00:56:39.863114 | orchestrator | ok: [testbed-node-5]
2026-04-09 00:56:39.863118 | orchestrator |
2026-04-09 00:56:39.863123 | orchestrator | TASK [ceph-facts : Set_fact is_atomic]
***************************************** 2026-04-09 00:56:39.863127 | orchestrator | Thursday 09 April 2026 00:54:37 +0000 (0:00:00.964) 0:00:02.105 ******** 2026-04-09 00:56:39.863131 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.863141 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.863163 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.863167 | orchestrator | 2026-04-09 00:56:39.863171 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-09 00:56:39.863175 | orchestrator | Thursday 09 April 2026 00:54:37 +0000 (0:00:00.295) 0:00:02.400 ******** 2026-04-09 00:56:39.863179 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.863183 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.863186 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.863190 | orchestrator | 2026-04-09 00:56:39.863194 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-09 00:56:39.863198 | orchestrator | Thursday 09 April 2026 00:54:38 +0000 (0:00:00.834) 0:00:03.235 ******** 2026-04-09 00:56:39.863202 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.863206 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.863210 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.863214 | orchestrator | 2026-04-09 00:56:39.863218 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-09 00:56:39.863222 | orchestrator | Thursday 09 April 2026 00:54:38 +0000 (0:00:00.303) 0:00:03.538 ******** 2026-04-09 00:56:39.863226 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.863230 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.863233 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.863237 | orchestrator | 2026-04-09 00:56:39.863252 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-09 00:56:39.863256 | 
orchestrator | Thursday 09 April 2026 00:54:39 +0000 (0:00:00.295) 0:00:03.833 ******** 2026-04-09 00:56:39.863260 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.863264 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.863268 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.863272 | orchestrator | 2026-04-09 00:56:39.863276 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-09 00:56:39.863279 | orchestrator | Thursday 09 April 2026 00:54:39 +0000 (0:00:00.336) 0:00:04.170 ******** 2026-04-09 00:56:39.863283 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.863288 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.863292 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.863296 | orchestrator | 2026-04-09 00:56:39.863300 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-09 00:56:39.863304 | orchestrator | Thursday 09 April 2026 00:54:40 +0000 (0:00:00.484) 0:00:04.654 ******** 2026-04-09 00:56:39.863308 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.863312 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.863316 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.863320 | orchestrator | 2026-04-09 00:56:39.863323 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-09 00:56:39.863327 | orchestrator | Thursday 09 April 2026 00:54:40 +0000 (0:00:00.270) 0:00:04.925 ******** 2026-04-09 00:56:39.863331 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 00:56:39.863335 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:56:39.863339 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:56:39.863343 | orchestrator | 2026-04-09 00:56:39.863347 | 
orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-09 00:56:39.863351 | orchestrator | Thursday 09 April 2026 00:54:40 +0000 (0:00:00.637) 0:00:05.562 ******** 2026-04-09 00:56:39.863355 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.863359 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.863363 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.863367 | orchestrator | 2026-04-09 00:56:39.863371 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-09 00:56:39.863375 | orchestrator | Thursday 09 April 2026 00:54:41 +0000 (0:00:00.415) 0:00:05.977 ******** 2026-04-09 00:56:39.863379 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 00:56:39.863387 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:56:39.863391 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:56:39.863394 | orchestrator | 2026-04-09 00:56:39.863398 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-09 00:56:39.863402 | orchestrator | Thursday 09 April 2026 00:54:44 +0000 (0:00:02.938) 0:00:08.916 ******** 2026-04-09 00:56:39.863406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-09 00:56:39.863411 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-09 00:56:39.863415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-09 00:56:39.863419 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.863423 | orchestrator | 2026-04-09 00:56:39.863435 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-09 00:56:39.863439 | orchestrator | Thursday 09 April 2026 00:54:44 +0000 (0:00:00.381) 0:00:09.297 ******** 
2026-04-09 00:56:39.863444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.863450 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.863454 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.863458 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.863462 | orchestrator | 2026-04-09 00:56:39.863466 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-09 00:56:39.863470 | orchestrator | Thursday 09 April 2026 00:54:45 +0000 (0:00:00.783) 0:00:10.081 ******** 2026-04-09 00:56:39.863476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.863490 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.863495 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.863499 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.863503 | orchestrator | 2026-04-09 00:56:39.863507 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-09 00:56:39.863511 | orchestrator | Thursday 09 April 2026 00:54:45 +0000 (0:00:00.138) 0:00:10.220 ******** 2026-04-09 00:56:39.863516 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'dd8d070160c9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-09 00:54:42.345577', 'end': '2026-04-09 00:54:42.369204', 'delta': '0:00:00.023627', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['dd8d070160c9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-09 00:56:39.863528 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a085292ffe8b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-09 00:54:43.425404', 'end': '2026-04-09 
00:54:43.451239', 'delta': '0:00:00.025835', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a085292ffe8b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-09 00:56:39.863536 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '94e1bf544096', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-09 00:54:44.190921', 'end': '2026-04-09 00:54:44.218700', 'delta': '0:00:00.027779', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['94e1bf544096'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-09 00:56:39.863541 | orchestrator | 2026-04-09 00:56:39.863545 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-09 00:56:39.863549 | orchestrator | Thursday 09 April 2026 00:54:45 +0000 (0:00:00.361) 0:00:10.581 ******** 2026-04-09 00:56:39.863553 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.863557 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.863560 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.863564 | orchestrator | 2026-04-09 00:56:39.863568 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-09 00:56:39.863572 | orchestrator 
| Thursday 09 April 2026 00:54:46 +0000 (0:00:00.418) 0:00:11.000 ******** 2026-04-09 00:56:39.863576 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-09 00:56:39.863581 | orchestrator | 2026-04-09 00:56:39.863584 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-09 00:56:39.863588 | orchestrator | Thursday 09 April 2026 00:54:48 +0000 (0:00:01.584) 0:00:12.585 ******** 2026-04-09 00:56:39.863592 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.863596 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.863600 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.863604 | orchestrator | 2026-04-09 00:56:39.863609 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-09 00:56:39.863613 | orchestrator | Thursday 09 April 2026 00:54:48 +0000 (0:00:00.348) 0:00:12.934 ******** 2026-04-09 00:56:39.863618 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.864022 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.864032 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.864036 | orchestrator | 2026-04-09 00:56:39.864045 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 00:56:39.864050 | orchestrator | Thursday 09 April 2026 00:54:48 +0000 (0:00:00.408) 0:00:13.343 ******** 2026-04-09 00:56:39.864061 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.864065 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.864070 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.864074 | orchestrator | 2026-04-09 00:56:39.864078 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-09 00:56:39.864082 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:00.471) 0:00:13.814 ******** 2026-04-09 00:56:39.864086 | orchestrator | 
ok: [testbed-node-3] 2026-04-09 00:56:39.864090 | orchestrator | 2026-04-09 00:56:39.864094 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-09 00:56:39.864098 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:00.119) 0:00:13.934 ******** 2026-04-09 00:56:39.864102 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.864106 | orchestrator | 2026-04-09 00:56:39.864110 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-09 00:56:39.864114 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:00.214) 0:00:14.148 ******** 2026-04-09 00:56:39.864118 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.864122 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.864126 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.864130 | orchestrator | 2026-04-09 00:56:39.864134 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-09 00:56:39.864138 | orchestrator | Thursday 09 April 2026 00:54:49 +0000 (0:00:00.301) 0:00:14.449 ******** 2026-04-09 00:56:39.864141 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.864145 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.864149 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.864153 | orchestrator | 2026-04-09 00:56:39.864157 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-09 00:56:39.864161 | orchestrator | Thursday 09 April 2026 00:54:50 +0000 (0:00:00.341) 0:00:14.791 ******** 2026-04-09 00:56:39.864165 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.864169 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.864173 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.864177 | orchestrator | 2026-04-09 00:56:39.864181 | orchestrator | TASK [ceph-facts : Resolve 
dedicated_device link(s)] *************************** 2026-04-09 00:56:39.864185 | orchestrator | Thursday 09 April 2026 00:54:50 +0000 (0:00:00.500) 0:00:15.291 ******** 2026-04-09 00:56:39.864189 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.864193 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.864197 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.864200 | orchestrator | 2026-04-09 00:56:39.864204 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-09 00:56:39.864208 | orchestrator | Thursday 09 April 2026 00:54:51 +0000 (0:00:00.309) 0:00:15.601 ******** 2026-04-09 00:56:39.864212 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.864216 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.864220 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.864224 | orchestrator | 2026-04-09 00:56:39.864228 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-09 00:56:39.864232 | orchestrator | Thursday 09 April 2026 00:54:51 +0000 (0:00:00.297) 0:00:15.898 ******** 2026-04-09 00:56:39.864236 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.864240 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.864244 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.864255 | orchestrator | 2026-04-09 00:56:39.864262 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-09 00:56:39.864424 | orchestrator | Thursday 09 April 2026 00:54:51 +0000 (0:00:00.308) 0:00:16.206 ******** 2026-04-09 00:56:39.864435 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.864442 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.864448 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.864453 | orchestrator | 2026-04-09 00:56:39.864460 | orchestrator | TASK [ceph-facts : Collect 
existed devices] ************************************ 2026-04-09 00:56:39.864473 | orchestrator | Thursday 09 April 2026 00:54:52 +0000 (0:00:00.505) 0:00:16.712 ******** 2026-04-09 00:56:39.864479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a7170513--cc74--5c6a--bf20--0648bd8fe211-osd--block--a7170513--cc74--5c6a--bf20--0648bd8fe211', 'dm-uuid-LVM-pe55oqTM5WXSDzjYyzUaRaqd3CMpaNVKPFQtec6Hf7WfksPCkUvb70pUeW8Rn5uq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b054f04d--2068--53f2--80e7--c9a997d8c167-osd--block--b054f04d--2068--53f2--80e7--c9a997d8c167', 'dm-uuid-LVM-L06u4HG1Z8VsVrmrPMttrHEynsdWY5tYPTFkVlcRUFzwwxtPZYElKrlXtNfRtW43'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864500 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bd7ebef9--c50f--5d78--8aca--8eab443ce24e-osd--block--bd7ebef9--c50f--5d78--8aca--8eab443ce24e', 'dm-uuid-LVM-DHlmD4zM6t0CAqLBKIqYSjRilxlYUBpjQoqaKbAkGOzIRpf04OLgwBCKB1uAEule'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c145dd89--b6cf--5d58--ae96--f0c6197297d1-osd--block--c145dd89--b6cf--5d58--ae96--f0c6197297d1', 'dm-uuid-LVM-oPQcAC3b4g0q6IF1gfNDxqKrQQ0gRj9dgMIxCG22CyIBdyjBvFCda05eHhbShhTC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864562 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part1', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part14', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part15', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part16', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864596 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a7170513--cc74--5c6a--bf20--0648bd8fe211-osd--block--a7170513--cc74--5c6a--bf20--0648bd8fe211'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-x2LgyY-pFsN-CRjH-fIff-VqZQ-iJC0-uuKqoj', 'scsi-0QEMU_QEMU_HARDDISK_1117e366-620b-4195-b3cd-cb9d1ba2563b', 'scsi-SQEMU_QEMU_HARDDISK_1117e366-620b-4195-b3cd-cb9d1ba2563b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b054f04d--2068--53f2--80e7--c9a997d8c167-osd--block--b054f04d--2068--53f2--80e7--c9a997d8c167'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sSH1Wc-rYYU-IHt9-clfm-yIKH-cWs0-4yx0l1', 'scsi-0QEMU_QEMU_HARDDISK_cc2e9d6e-928c-46c6-aaaa-26c6da7e313f', 'scsi-SQEMU_QEMU_HARDDISK_cc2e9d6e-928c-46c6-aaaa-26c6da7e313f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b113bf69-5b2f-465f-b4d6-8ed3709e703c', 'scsi-SQEMU_QEMU_HARDDISK_b113bf69-5b2f-465f-b4d6-8ed3709e703c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-09 00:56:39.864656 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.864666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864719 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bd7ebef9--c50f--5d78--8aca--8eab443ce24e-osd--block--bd7ebef9--c50f--5d78--8aca--8eab443ce24e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-php0qM-0Azd-kHee-TCGh-7MhG-Ev8e-m8IXL8', 'scsi-0QEMU_QEMU_HARDDISK_a2730516-0b41-4086-99de-bfe7a2602e3b', 'scsi-SQEMU_QEMU_HARDDISK_a2730516-0b41-4086-99de-bfe7a2602e3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c145dd89--b6cf--5d58--ae96--f0c6197297d1-osd--block--c145dd89--b6cf--5d58--ae96--f0c6197297d1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UvBVT8-BbxX-nFqu-R6Bp-Tkm7-HNbO-Iu1NbH', 'scsi-0QEMU_QEMU_HARDDISK_7d3f3539-bcc0-40e2-bb47-88465426d961', 'scsi-SQEMU_QEMU_HARDDISK_7d3f3539-bcc0-40e2-bb47-88465426d961'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78a0dd59-f7ff-4f21-9079-dceaea0538fa', 'scsi-SQEMU_QEMU_HARDDISK_78a0dd59-f7ff-4f21-9079-dceaea0538fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864742 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.864746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e1b9ff7a--7324--53df--902d--27a5c0e1e380-osd--block--e1b9ff7a--7324--53df--902d--27a5c0e1e380', 'dm-uuid-LVM-SVjSV9dQLY9i6LNd8kn9mbDKHPRpFDjaLiY1sw7Qqhj8B5em5drNdfrIXRXrBJsd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--c85b9e91--1f7c--51a1--92b9--1f1081da5c54-osd--block--c85b9e91--1f7c--51a1--92b9--1f1081da5c54', 'dm-uuid-LVM-wwJgOyu1pTIB1IcZ0ixOqWljpfUKPIqNdd43eLv3qBvIeVziSCvGKQMWUepC7KsH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-09 00:56:39.864825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part1', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part14', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part15', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part16', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e1b9ff7a--7324--53df--902d--27a5c0e1e380-osd--block--e1b9ff7a--7324--53df--902d--27a5c0e1e380'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hmNGQz-IaRB-JT1G-Ibaq-MHss-JZrN-2V2na8', 'scsi-0QEMU_QEMU_HARDDISK_4915a96f-c727-49cd-8e71-365065423554', 'scsi-SQEMU_QEMU_HARDDISK_4915a96f-c727-49cd-8e71-365065423554'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c85b9e91--1f7c--51a1--92b9--1f1081da5c54-osd--block--c85b9e91--1f7c--51a1--92b9--1f1081da5c54'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VZ7sNY-nhf9-sUdm-OQ93-lYqN-j4aB-cnxbMZ', 'scsi-0QEMU_QEMU_HARDDISK_de323fae-e08c-44ab-9f5d-e0649991af02', 'scsi-SQEMU_QEMU_HARDDISK_de323fae-e08c-44ab-9f5d-e0649991af02'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0aa1a7f9-eb63-47f4-a3c4-c66e6167b3d6', 'scsi-SQEMU_QEMU_HARDDISK_0aa1a7f9-eb63-47f4-a3c4-c66e6167b3d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-09 00:56:39.864868 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.864872 | orchestrator | 2026-04-09 00:56:39.864876 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-09 00:56:39.864881 | orchestrator | Thursday 09 April 2026 00:54:52 +0000 (0:00:00.621) 0:00:17.333 ******** 2026-04-09 00:56:39.864885 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a7170513--cc74--5c6a--bf20--0648bd8fe211-osd--block--a7170513--cc74--5c6a--bf20--0648bd8fe211', 'dm-uuid-LVM-pe55oqTM5WXSDzjYyzUaRaqd3CMpaNVKPFQtec6Hf7WfksPCkUvb70pUeW8Rn5uq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864894 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b054f04d--2068--53f2--80e7--c9a997d8c167-osd--block--b054f04d--2068--53f2--80e7--c9a997d8c167', 'dm-uuid-LVM-L06u4HG1Z8VsVrmrPMttrHEynsdWY5tYPTFkVlcRUFzwwxtPZYElKrlXtNfRtW43'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864898 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864902 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864918 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864922 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864932 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864937 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bd7ebef9--c50f--5d78--8aca--8eab443ce24e-osd--block--bd7ebef9--c50f--5d78--8aca--8eab443ce24e', 'dm-uuid-LVM-DHlmD4zM6t0CAqLBKIqYSjRilxlYUBpjQoqaKbAkGOzIRpf04OLgwBCKB1uAEule'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864944 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864952 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c145dd89--b6cf--5d58--ae96--f0c6197297d1-osd--block--c145dd89--b6cf--5d58--ae96--f0c6197297d1', 'dm-uuid-LVM-oPQcAC3b4g0q6IF1gfNDxqKrQQ0gRj9dgMIxCG22CyIBdyjBvFCda05eHhbShhTC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864959 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part1', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part14', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part15', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part16', 'scsi-SQEMU_QEMU_HARDDISK_74b5ef9f-7038-474f-83c8-72643aabc9bd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864966 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a7170513--cc74--5c6a--bf20--0648bd8fe211-osd--block--a7170513--cc74--5c6a--bf20--0648bd8fe211'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-x2LgyY-pFsN-CRjH-fIff-VqZQ-iJC0-uuKqoj', 'scsi-0QEMU_QEMU_HARDDISK_1117e366-620b-4195-b3cd-cb9d1ba2563b', 'scsi-SQEMU_QEMU_HARDDISK_1117e366-620b-4195-b3cd-cb9d1ba2563b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864979 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864983 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b054f04d--2068--53f2--80e7--c9a997d8c167-osd--block--b054f04d--2068--53f2--80e7--c9a997d8c167'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sSH1Wc-rYYU-IHt9-clfm-yIKH-cWs0-4yx0l1', 'scsi-0QEMU_QEMU_HARDDISK_cc2e9d6e-928c-46c6-aaaa-26c6da7e313f', 'scsi-SQEMU_QEMU_HARDDISK_cc2e9d6e-928c-46c6-aaaa-26c6da7e313f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864989 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.864994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b113bf69-5b2f-465f-b4d6-8ed3709e703c', 'scsi-SQEMU_QEMU_HARDDISK_b113bf69-5b2f-465f-b4d6-8ed3709e703c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865002 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865011 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865015 | orchestrator | skipping: 
[testbed-node-3] 2026-04-09 00:56:39.865019 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865023 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865030 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865036 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865045 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part1', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part14', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part15', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part16', 'scsi-SQEMU_QEMU_HARDDISK_3a6d3317-2b94-4d3e-96ca-e5381511ebbc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865054 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--bd7ebef9--c50f--5d78--8aca--8eab443ce24e-osd--block--bd7ebef9--c50f--5d78--8aca--8eab443ce24e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-php0qM-0Azd-kHee-TCGh-7MhG-Ev8e-m8IXL8', 'scsi-0QEMU_QEMU_HARDDISK_a2730516-0b41-4086-99de-bfe7a2602e3b', 'scsi-SQEMU_QEMU_HARDDISK_a2730516-0b41-4086-99de-bfe7a2602e3b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865059 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e1b9ff7a--7324--53df--902d--27a5c0e1e380-osd--block--e1b9ff7a--7324--53df--902d--27a5c0e1e380', 'dm-uuid-LVM-SVjSV9dQLY9i6LNd8kn9mbDKHPRpFDjaLiY1sw7Qqhj8B5em5drNdfrIXRXrBJsd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865067 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c145dd89--b6cf--5d58--ae96--f0c6197297d1-osd--block--c145dd89--b6cf--5d58--ae96--f0c6197297d1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UvBVT8-BbxX-nFqu-R6Bp-Tkm7-HNbO-Iu1NbH', 'scsi-0QEMU_QEMU_HARDDISK_7d3f3539-bcc0-40e2-bb47-88465426d961', 'scsi-SQEMU_QEMU_HARDDISK_7d3f3539-bcc0-40e2-bb47-88465426d961'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865076 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c85b9e91--1f7c--51a1--92b9--1f1081da5c54-osd--block--c85b9e91--1f7c--51a1--92b9--1f1081da5c54', 'dm-uuid-LVM-wwJgOyu1pTIB1IcZ0ixOqWljpfUKPIqNdd43eLv3qBvIeVziSCvGKQMWUepC7KsH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-09 00:56:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:56:39.865086 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78a0dd59-f7ff-4f21-9079-dceaea0538fa', 'scsi-SQEMU_QEMU_HARDDISK_78a0dd59-f7ff-4f21-9079-dceaea0538fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865093 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865102 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865106 | orchestrator | skipping: 
[testbed-node-4] 2026-04-09 00:56:39.865111 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865116 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865124 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865129 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865134 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865141 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865148 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865157 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part1', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part14', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part15', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part16', 'scsi-SQEMU_QEMU_HARDDISK_1ca5d9af-c9b0-4634-80a3-044251651961-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865165 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e1b9ff7a--7324--53df--902d--27a5c0e1e380-osd--block--e1b9ff7a--7324--53df--902d--27a5c0e1e380'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hmNGQz-IaRB-JT1G-Ibaq-MHss-JZrN-2V2na8', 'scsi-0QEMU_QEMU_HARDDISK_4915a96f-c727-49cd-8e71-365065423554', 'scsi-SQEMU_QEMU_HARDDISK_4915a96f-c727-49cd-8e71-365065423554'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865173 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c85b9e91--1f7c--51a1--92b9--1f1081da5c54-osd--block--c85b9e91--1f7c--51a1--92b9--1f1081da5c54'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VZ7sNY-nhf9-sUdm-OQ93-lYqN-j4aB-cnxbMZ', 'scsi-0QEMU_QEMU_HARDDISK_de323fae-e08c-44ab-9f5d-e0649991af02', 'scsi-SQEMU_QEMU_HARDDISK_de323fae-e08c-44ab-9f5d-e0649991af02'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865177 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0aa1a7f9-eb63-47f4-a3c4-c66e6167b3d6', 'scsi-SQEMU_QEMU_HARDDISK_0aa1a7f9-eb63-47f4-a3c4-c66e6167b3d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865185 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-09-00-03-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-09 00:56:39.865190 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.865194 | orchestrator | 2026-04-09 00:56:39.865199 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-09 00:56:39.865203 | orchestrator | Thursday 09 April 2026 00:54:53 +0000 (0:00:00.580) 0:00:17.913 ******** 2026-04-09 00:56:39.865208 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.865212 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.865217 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.865221 | orchestrator | 2026-04-09 00:56:39.865226 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-04-09 00:56:39.865230 | orchestrator | Thursday 09 April 2026 00:54:53 +0000 (0:00:00.602) 0:00:18.516 ******** 2026-04-09 00:56:39.865235 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.865239 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.865244 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.865248 | orchestrator | 2026-04-09 00:56:39.865253 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 00:56:39.865260 | orchestrator | Thursday 09 April 2026 00:54:54 +0000 (0:00:00.471) 0:00:18.987 ******** 2026-04-09 00:56:39.865265 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.865269 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.865274 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.865278 | orchestrator | 2026-04-09 00:56:39.865282 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-09 00:56:39.865287 | orchestrator | Thursday 09 April 2026 00:54:55 +0000 (0:00:00.661) 0:00:19.648 ******** 2026-04-09 00:56:39.865292 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.865296 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.865301 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.865305 | orchestrator | 2026-04-09 00:56:39.865309 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-09 00:56:39.865314 | orchestrator | Thursday 09 April 2026 00:54:55 +0000 (0:00:00.275) 0:00:19.923 ******** 2026-04-09 00:56:39.865319 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.865325 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.865330 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.865335 | orchestrator | 2026-04-09 00:56:39.865339 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-04-09 00:56:39.865344 | orchestrator | Thursday 09 April 2026 00:54:55 +0000 (0:00:00.385) 0:00:20.309 ******** 2026-04-09 00:56:39.865348 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.865353 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.865357 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.865362 | orchestrator | 2026-04-09 00:56:39.865366 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-09 00:56:39.865370 | orchestrator | Thursday 09 April 2026 00:54:56 +0000 (0:00:00.447) 0:00:20.757 ******** 2026-04-09 00:56:39.865375 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-09 00:56:39.865380 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-09 00:56:39.865384 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-09 00:56:39.865389 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-09 00:56:39.865393 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-09 00:56:39.865397 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-09 00:56:39.865402 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-09 00:56:39.865406 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-09 00:56:39.865411 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-09 00:56:39.865416 | orchestrator | 2026-04-09 00:56:39.865421 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-09 00:56:39.865425 | orchestrator | Thursday 09 April 2026 00:54:56 +0000 (0:00:00.800) 0:00:21.558 ******** 2026-04-09 00:56:39.865430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-09 00:56:39.865435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-09 00:56:39.865439 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-04-09 00:56:39.865444 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.865448 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-09 00:56:39.865452 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-09 00:56:39.865456 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-09 00:56:39.865460 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.865463 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-09 00:56:39.865467 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-09 00:56:39.865471 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-09 00:56:39.865475 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.865479 | orchestrator | 2026-04-09 00:56:39.865483 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-09 00:56:39.865490 | orchestrator | Thursday 09 April 2026 00:54:57 +0000 (0:00:00.326) 0:00:21.885 ******** 2026-04-09 00:56:39.865494 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:56:39.865498 | orchestrator | 2026-04-09 00:56:39.865504 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-09 00:56:39.865509 | orchestrator | Thursday 09 April 2026 00:54:57 +0000 (0:00:00.646) 0:00:22.532 ******** 2026-04-09 00:56:39.865513 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.865517 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.865521 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.865525 | orchestrator | 2026-04-09 00:56:39.865529 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-04-09 00:56:39.865533 | orchestrator | Thursday 09 April 2026 00:54:58 +0000 (0:00:00.313) 0:00:22.845 ******** 2026-04-09 00:56:39.865537 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.865541 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.865545 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.865549 | orchestrator | 2026-04-09 00:56:39.865553 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-09 00:56:39.865557 | orchestrator | Thursday 09 April 2026 00:54:58 +0000 (0:00:00.293) 0:00:23.139 ******** 2026-04-09 00:56:39.865560 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.865564 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.865568 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:56:39.865572 | orchestrator | 2026-04-09 00:56:39.865576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-09 00:56:39.865580 | orchestrator | Thursday 09 April 2026 00:54:58 +0000 (0:00:00.303) 0:00:23.442 ******** 2026-04-09 00:56:39.865584 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.865588 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.865592 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.865596 | orchestrator | 2026-04-09 00:56:39.865600 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-09 00:56:39.865603 | orchestrator | Thursday 09 April 2026 00:54:59 +0000 (0:00:00.570) 0:00:24.013 ******** 2026-04-09 00:56:39.865607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:56:39.865611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:56:39.865615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:56:39.865619 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.865623 | 
orchestrator | 2026-04-09 00:56:39.865627 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-09 00:56:39.865631 | orchestrator | Thursday 09 April 2026 00:54:59 +0000 (0:00:00.357) 0:00:24.370 ******** 2026-04-09 00:56:39.865635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:56:39.865639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:56:39.865643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:56:39.865647 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.865650 | orchestrator | 2026-04-09 00:56:39.865657 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-09 00:56:39.865661 | orchestrator | Thursday 09 April 2026 00:55:00 +0000 (0:00:00.353) 0:00:24.723 ******** 2026-04-09 00:56:39.865664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-09 00:56:39.865668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-09 00:56:39.865672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-09 00:56:39.865676 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.865680 | orchestrator | 2026-04-09 00:56:39.865684 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-09 00:56:39.865692 | orchestrator | Thursday 09 April 2026 00:55:00 +0000 (0:00:00.368) 0:00:25.091 ******** 2026-04-09 00:56:39.865696 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:56:39.865700 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:56:39.865704 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:56:39.865708 | orchestrator | 2026-04-09 00:56:39.865712 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-09 00:56:39.865716 | orchestrator | Thursday 09 April 2026 00:55:00 +0000 
(0:00:00.289) 0:00:25.381 ******** 2026-04-09 00:56:39.865720 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-09 00:56:39.865724 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-09 00:56:39.865727 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-09 00:56:39.865731 | orchestrator | 2026-04-09 00:56:39.865735 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-09 00:56:39.865739 | orchestrator | Thursday 09 April 2026 00:55:01 +0000 (0:00:00.469) 0:00:25.850 ******** 2026-04-09 00:56:39.865743 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 00:56:39.865747 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:56:39.865751 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:56:39.865755 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-09 00:56:39.865759 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 00:56:39.865763 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 00:56:39.865767 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 00:56:39.865771 | orchestrator | 2026-04-09 00:56:39.865775 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-09 00:56:39.865779 | orchestrator | Thursday 09 April 2026 00:55:02 +0000 (0:00:00.899) 0:00:26.750 ******** 2026-04-09 00:56:39.865782 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-09 00:56:39.865786 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-09 00:56:39.865790 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-09 00:56:39.865794 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-09 00:56:39.865855 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-09 00:56:39.865861 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-09 00:56:39.865865 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-09 00:56:39.865869 | orchestrator | 2026-04-09 00:56:39.865872 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-09 00:56:39.865876 | orchestrator | Thursday 09 April 2026 00:55:03 +0000 (0:00:01.627) 0:00:28.378 ******** 2026-04-09 00:56:39.865880 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:56:39.865884 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:56:39.865888 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-09 00:56:39.865892 | orchestrator | 2026-04-09 00:56:39.865896 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-09 00:56:39.865900 | orchestrator | Thursday 09 April 2026 00:55:04 +0000 (0:00:00.316) 0:00:28.694 ******** 2026-04-09 00:56:39.865905 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:56:39.865910 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-04-09 00:56:39.865917 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:56:39.865922 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:56:39.865986 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-09 00:56:39.865997 | orchestrator | 2026-04-09 00:56:39.866001 | orchestrator | TASK [generate keys] *********************************************************** 2026-04-09 00:56:39.866005 | orchestrator | Thursday 09 April 2026 00:55:46 +0000 (0:00:42.033) 0:01:10.728 ******** 2026-04-09 00:56:39.866010 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866049 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866053 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866058 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866062 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866066 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 
00:56:39.866070 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-09 00:56:39.866074 | orchestrator | 2026-04-09 00:56:39.866078 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-09 00:56:39.866082 | orchestrator | Thursday 09 April 2026 00:56:09 +0000 (0:00:23.610) 0:01:34.339 ******** 2026-04-09 00:56:39.866086 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866090 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866094 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866098 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866102 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866106 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866110 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-09 00:56:39.866114 | orchestrator | 2026-04-09 00:56:39.866118 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-09 00:56:39.866122 | orchestrator | Thursday 09 April 2026 00:56:20 +0000 (0:00:10.924) 0:01:45.264 ******** 2026-04-09 00:56:39.866126 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866130 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:56:39.866134 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:56:39.866143 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866147 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:56:39.866151 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:56:39.866160 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866164 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:56:39.866168 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:56:39.866172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866176 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:56:39.866179 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:56:39.866183 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866187 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:56:39.866191 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:56:39.866195 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-09 00:56:39.866199 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-09 00:56:39.866203 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-09 00:56:39.866207 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-09 00:56:39.866211 | orchestrator | 2026-04-09 00:56:39.866215 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-09 00:56:39.866219 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-09 00:56:39.866225 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-09 00:56:39.866229 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-09 00:56:39.866233 | orchestrator | 2026-04-09 00:56:39.866237 | orchestrator | 2026-04-09 00:56:39.866241 | orchestrator | 2026-04-09 00:56:39.866248 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-09 00:56:39.866252 | orchestrator | Thursday 09 April 2026 00:56:38 +0000 (0:00:17.785) 0:02:03.049 ******** 2026-04-09 00:56:39.866256 | orchestrator | =============================================================================== 2026-04-09 00:56:39.866260 | orchestrator | create openstack pool(s) ----------------------------------------------- 42.03s 2026-04-09 00:56:39.866263 | orchestrator | generate keys ---------------------------------------------------------- 23.61s 2026-04-09 00:56:39.866268 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.79s 2026-04-09 00:56:39.866272 | orchestrator | get keys from monitors ------------------------------------------------- 10.92s 2026-04-09 00:56:39.866275 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.94s 2026-04-09 00:56:39.866279 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.63s 2026-04-09 00:56:39.866283 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.58s 2026-04-09 00:56:39.866287 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.96s 2026-04-09 00:56:39.866291 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.90s 2026-04-09 00:56:39.866295 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.83s 2026-04-09 
00:56:39.866299 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.80s 2026-04-09 00:56:39.866303 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.78s 2026-04-09 00:56:39.866307 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.66s 2026-04-09 00:56:39.866311 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.65s 2026-04-09 00:56:39.866320 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s 2026-04-09 00:56:39.866324 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.62s 2026-04-09 00:56:39.866328 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.60s 2026-04-09 00:56:39.866332 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.59s 2026-04-09 00:56:39.866336 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s 2026-04-09 00:56:39.866340 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.57s 2026-04-09 00:56:39.866344 | orchestrator | 2026-04-09 00:56:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:42.912490 | orchestrator | 2026-04-09 00:56:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:56:42.913475 | orchestrator | 2026-04-09 00:56:42 | INFO  | Task 261c08ac-c758-4042-8e09-6d9fdbcbb185 is in state STARTED 2026-04-09 00:56:42.913521 | orchestrator | 2026-04-09 00:56:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:45.953572 | orchestrator | 2026-04-09 00:56:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:56:45.955530 | orchestrator | 2026-04-09 00:56:45 | INFO  | Task 261c08ac-c758-4042-8e09-6d9fdbcbb185 is in state 
STARTED 2026-04-09 00:56:45.955573 | orchestrator | 2026-04-09 00:56:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:48.997016 | orchestrator | 2026-04-09 00:56:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:56:48.998649 | orchestrator | 2026-04-09 00:56:48 | INFO  | Task 261c08ac-c758-4042-8e09-6d9fdbcbb185 is in state STARTED 2026-04-09 00:56:48.998693 | orchestrator | 2026-04-09 00:56:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:52.049062 | orchestrator | 2026-04-09 00:56:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:56:52.049440 | orchestrator | 2026-04-09 00:56:52 | INFO  | Task 261c08ac-c758-4042-8e09-6d9fdbcbb185 is in state STARTED 2026-04-09 00:56:52.049489 | orchestrator | 2026-04-09 00:56:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:55.096426 | orchestrator | 2026-04-09 00:56:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:56:55.097932 | orchestrator | 2026-04-09 00:56:55 | INFO  | Task 261c08ac-c758-4042-8e09-6d9fdbcbb185 is in state STARTED 2026-04-09 00:56:55.098067 | orchestrator | 2026-04-09 00:56:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:56:58.145361 | orchestrator | 2026-04-09 00:56:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:56:58.145934 | orchestrator | 2026-04-09 00:56:58 | INFO  | Task 261c08ac-c758-4042-8e09-6d9fdbcbb185 is in state STARTED 2026-04-09 00:56:58.145989 | orchestrator | 2026-04-09 00:56:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:01.217639 | orchestrator | 2026-04-09 00:57:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:57:01.219594 | orchestrator | 2026-04-09 00:57:01 | INFO  | Task 261c08ac-c758-4042-8e09-6d9fdbcbb185 is in state STARTED 2026-04-09 00:57:01.219721 | orchestrator | 2026-04-09 00:57:01 | INFO  
| Wait 1 second(s) until the next check 2026-04-09 00:57:04.273630 | orchestrator | 2026-04-09 00:57:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:57:04.276066 | orchestrator | 2026-04-09 00:57:04 | INFO  | Task 261c08ac-c758-4042-8e09-6d9fdbcbb185 is in state STARTED 2026-04-09 00:57:04.276139 | orchestrator | 2026-04-09 00:57:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:07.323257 | orchestrator | 2026-04-09 00:57:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:57:07.324960 | orchestrator | 2026-04-09 00:57:07 | INFO  | Task 261c08ac-c758-4042-8e09-6d9fdbcbb185 is in state STARTED 2026-04-09 00:57:07.325035 | orchestrator | 2026-04-09 00:57:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:10.367633 | orchestrator | 2026-04-09 00:57:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:57:10.369055 | orchestrator | 2026-04-09 00:57:10 | INFO  | Task 261c08ac-c758-4042-8e09-6d9fdbcbb185 is in state STARTED 2026-04-09 00:57:10.369125 | orchestrator | 2026-04-09 00:57:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:13.422241 | orchestrator | 2026-04-09 00:57:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:57:13.423906 | orchestrator | 2026-04-09 00:57:13 | INFO  | Task 261c08ac-c758-4042-8e09-6d9fdbcbb185 is in state STARTED 2026-04-09 00:57:13.423959 | orchestrator | 2026-04-09 00:57:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:16.469890 | orchestrator | 2026-04-09 00:57:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:57:16.472005 | orchestrator | 2026-04-09 00:57:16 | INFO  | Task 261c08ac-c758-4042-8e09-6d9fdbcbb185 is in state SUCCESS 2026-04-09 00:57:16.472068 | orchestrator | 2026-04-09 00:57:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:57:19.525059 | 
orchestrator | 2026-04-09 00:57:19 | INFO  | Task b12feb7d-98df-4613-b9fe-bb421a7a179f is in state STARTED
2026-04-09 00:57:19.526657 | orchestrator | 2026-04-09 00:57:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:57:19.526724 | orchestrator | 2026-04-09 00:57:19 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:58:05.237615 | orchestrator | 2026-04-09 00:58:05 | INFO  | Task b12feb7d-98df-4613-b9fe-bb421a7a179f is in state STARTED
2026-04-09 00:58:05.239818 | orchestrator | 2026-04-09 00:58:05 | INFO  | Task
4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:58:05.239879 | orchestrator | 2026-04-09 00:58:05 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:58:08.278871 | orchestrator | 2026-04-09 00:58:08 | INFO  | Task b12feb7d-98df-4613-b9fe-bb421a7a179f is in state STARTED
2026-04-09 00:58:08.278965 | orchestrator | 2026-04-09 00:58:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:58:08.278975 | orchestrator | 2026-04-09 00:58:08 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:58:11.342273 | orchestrator | 2026-04-09 00:58:11 | INFO  | Task b12feb7d-98df-4613-b9fe-bb421a7a179f is in state STARTED
2026-04-09 00:58:11.343615 | orchestrator | 2026-04-09 00:58:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:58:11.343673 | orchestrator | 2026-04-09 00:58:11 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:58:14.386646 | orchestrator | 2026-04-09 00:58:14 | INFO  | Task b12feb7d-98df-4613-b9fe-bb421a7a179f is in state STARTED
2026-04-09 00:58:14.386784 | orchestrator | 2026-04-09 00:58:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:58:14.386809 | orchestrator | 2026-04-09 00:58:14 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:58:17.433557 | orchestrator | 2026-04-09 00:58:17 | INFO  | Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED
2026-04-09 00:58:17.435404 | orchestrator |
2026-04-09 00:58:17.435464 | orchestrator |
2026-04-09 00:58:17.435477 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-04-09 00:58:17.435487 | orchestrator |
2026-04-09 00:58:17.435495 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-04-09 00:58:17.435504 | orchestrator | Thursday 09 April 2026 00:56:41 +0000 (0:00:00.224) 0:00:00.224 ********
2026-04-09 00:58:17.435513 | orchestrator |
ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-09 00:58:17.435524 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-09 00:58:17.435532 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-09 00:58:17.435541 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-09 00:58:17.435551 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-09 00:58:17.435560 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-09 00:58:17.435568 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-09 00:58:17.435578 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-09 00:58:17.435587 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-09 00:58:17.435596 | orchestrator |
2026-04-09 00:58:17.435605 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-09 00:58:17.435614 | orchestrator | Thursday 09 April 2026 00:56:46 +0000 (0:00:04.829) 0:00:05.053 ********
2026-04-09 00:58:17.435622 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-09 00:58:17.435631 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-09 00:58:17.435640 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-09 00:58:17.435648 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-09 00:58:17.435677 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-09 00:58:17.435854 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-09 00:58:17.435863 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-09 00:58:17.435870 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-09 00:58:17.435876 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-09 00:58:17.435881 | orchestrator |
2026-04-09 00:58:17.435887 | orchestrator | TASK [Create share directory] **************************************************
2026-04-09 00:58:17.435893 | orchestrator | Thursday 09 April 2026 00:56:51 +0000 (0:00:00.871) 0:00:09.301 ********
2026-04-09 00:58:17.435900 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-09 00:58:17.435906 | orchestrator |
2026-04-09 00:58:17.435911 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-09 00:58:17.435925 | orchestrator | Thursday 09 April 2026 00:56:51 +0000 (0:00:00.871) 0:00:10.172 ********
2026-04-09 00:58:17.435932 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-09 00:58:17.435939 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-09 00:58:17.435945 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-09 00:58:17.435951 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-09 00:58:17.435958 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-09 00:58:17.435964 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-09 00:58:17.436011 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-09 00:58:17.436017 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-09 00:58:17.436024 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-09 00:58:17.436030 | orchestrator |
2026-04-09 00:58:17.436036 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-09 00:58:17.436042 | orchestrator | Thursday 09 April 2026 00:57:05 +0000 (0:00:13.792) 0:00:23.965 ********
2026-04-09 00:58:17.436049 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-09 00:58:17.436055 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-09 00:58:17.436074 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-09 00:58:17.436080 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-09 00:58:17.436100 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-09 00:58:17.436106 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-09 00:58:17.436115 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-09 00:58:17.436124 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-09 00:58:17.436132 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-09 00:58:17.436141 | orchestrator |
2026-04-09 00:58:17.436149 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-04-09 00:58:17.436158 | orchestrator | Thursday 09 April 2026 00:57:09 +0000 (0:00:03.397) 0:00:27.363 ********
2026-04-09 00:58:17.436168 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-09 00:58:17.436188 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-09 00:58:17.436198 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-09 00:58:17.436207 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-09 00:58:17.436216 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-09 00:58:17.436225 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-09 00:58:17.436235 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-09 00:58:17.436244 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-09 00:58:17.436308 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-09 00:58:17.436317 | orchestrator |
2026-04-09 00:58:17.436322 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:58:17.436328 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-09 00:58:17.436336 | orchestrator |
2026-04-09 00:58:17.436342 | orchestrator |
2026-04-09 00:58:17.436347 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:58:17.436353 | orchestrator | Thursday 09 April 2026 00:57:15 +0000 (0:00:06.814) 0:00:34.177 ********
2026-04-09 00:58:17.436359 | orchestrator | ===============================================================================
2026-04-09 00:58:17.436365 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.79s
2026-04-09 00:58:17.436371 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.81s
2026-04-09 00:58:17.436376 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.83s
2026-04-09 00:58:17.436382 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.25s
2026-04-09 00:58:17.436388 | orchestrator | Check if target directories exist --------------------------------------- 3.40s
2026-04-09 00:58:17.436394 | orchestrator | Create share directory -------------------------------------------------- 0.87s
2026-04-09 00:58:17.436399 | orchestrator |
2026-04-09 00:58:17.436408 | orchestrator |
2026-04-09 00:58:17.436417 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-09 00:58:17.436426 | orchestrator |
2026-04-09 00:58:17.436436 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-09 00:58:17.436443 | orchestrator | Thursday 09 April 2026 00:57:19 +0000 (0:00:00.296) 0:00:00.296 ********
2026-04-09 00:58:17.436475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-09 00:58:17.436487 | orchestrator |
2026-04-09 00:58:17.436497 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-09 00:58:17.436507 | orchestrator | Thursday 09 April 2026 00:57:19 +0000 (0:00:00.216) 0:00:00.513 ********
2026-04-09 00:58:17.436515 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-09 00:58:17.436522 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-09 00:58:17.436529 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-09 00:58:17.436535 | orchestrator |
2026-04-09 00:58:17.436542 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-09 00:58:17.436548 | orchestrator | Thursday 09 April 2026 00:57:21 +0000 (0:00:01.641) 0:00:02.155 ********
2026-04-09 00:58:17.436555 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-09 00:58:17.436561 | orchestrator |
2026-04-09 00:58:17.436568 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-09 00:58:17.436575 | orchestrator | Thursday 09 April 2026 00:57:22 +0000 (0:00:01.117) 0:00:03.273 ********
2026-04-09 00:58:17.436582 | orchestrator | changed: [testbed-manager]
2026-04-09 00:58:17.436595 | orchestrator |
2026-04-09 00:58:17.436601 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-09 00:58:17.436609 | orchestrator | Thursday 09 April 2026 00:57:23 +0000 (0:00:00.886) 0:00:04.159 ********
2026-04-09 00:58:17.436617 | orchestrator | changed: [testbed-manager]
2026-04-09 00:58:17.436627 | orchestrator |
2026-04-09 00:58:17.436636 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-09 00:58:17.436646 | orchestrator | Thursday 09 April 2026 00:57:24 +0000 (0:00:00.844) 0:00:05.003 ********
2026-04-09 00:58:17.436657 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-04-09 00:58:17.436663 | orchestrator | ok: [testbed-manager]
2026-04-09 00:58:17.436669 | orchestrator |
2026-04-09 00:58:17.436675 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-09 00:58:17.436706 | orchestrator | Thursday 09 April 2026 00:58:05 +0000 (0:00:41.664) 0:00:46.668 ********
2026-04-09 00:58:17.436716 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-04-09 00:58:17.436725 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-04-09 00:58:17.436734 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-04-09 00:58:17.436742 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-04-09 00:58:17.436752 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-04-09 00:58:17.436761 | orchestrator |
2026-04-09 00:58:17.436771 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-09 00:58:17.436778 | orchestrator | Thursday 09 April 2026 00:58:09 +0000 (0:00:03.690) 0:00:50.358 ********
2026-04-09 00:58:17.436784 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-09 00:58:17.436789 | orchestrator |
2026-04-09 00:58:17.436795 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-09 00:58:17.436801 | orchestrator | Thursday 09 April 2026 00:58:10 +0000 (0:00:00.584) 0:00:50.942 ********
2026-04-09 00:58:17.436806 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:58:17.436812 | orchestrator |
2026-04-09 00:58:17.436818 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-09 00:58:17.436823 | orchestrator | Thursday 09 April 2026 00:58:10 +0000 (0:00:00.317) 0:00:51.063 ********
2026-04-09 00:58:17.436829 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:58:17.436835 | orchestrator |
2026-04-09 00:58:17.436840 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-04-09 00:58:17.436920 | orchestrator | Thursday 09 April 2026 00:58:10 +0000 (0:00:00.317) 0:00:51.380 ********
2026-04-09 00:58:17.436928 | orchestrator | changed: [testbed-manager]
2026-04-09 00:58:17.436934 | orchestrator |
2026-04-09 00:58:17.436941 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-04-09 00:58:17.436950 | orchestrator | Thursday 09 April 2026 00:58:11 +0000 (0:00:01.361) 0:00:52.741 ********
2026-04-09 00:58:17.436960 | orchestrator | changed: [testbed-manager]
2026-04-09 00:58:17.436969 | orchestrator |
2026-04-09 00:58:17.436978 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-04-09 00:58:17.436988 | orchestrator | Thursday 09 April 2026 00:58:12 +0000 (0:00:00.693) 0:00:53.435 ********
2026-04-09 00:58:17.436998 | orchestrator | changed: [testbed-manager]
2026-04-09 00:58:17.437008 | orchestrator |
2026-04-09 00:58:17.437015 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-04-09 00:58:17.437020 | orchestrator | Thursday 09 April 2026 00:58:13 +0000 (0:00:00.564) 0:00:53.999 ********
2026-04-09 00:58:17.437026 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-09 00:58:17.437032 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-09 00:58:17.437037 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-09 00:58:17.437043 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-09 00:58:17.437049 | orchestrator |
2026-04-09 00:58:17.437054 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:58:17.437071 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-09 00:58:17.437080 | orchestrator |
2026-04-09 00:58:17.437090 | orchestrator |
2026-04-09 00:58:17.437099 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:58:17.437109 | orchestrator | Thursday 09 April 2026 00:58:14 +0000 (0:00:01.471) 0:00:55.470 ********
2026-04-09 00:58:17.437117 | orchestrator | ===============================================================================
2026-04-09 00:58:17.437123 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.66s
2026-04-09 00:58:17.437128 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.69s
2026-04-09 00:58:17.437134 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.64s
2026-04-09 00:58:17.437140 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.47s
2026-04-09 00:58:17.437145 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.36s
2026-04-09 00:58:17.437151 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.12s
2026-04-09 00:58:17.437157 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.89s
2026-04-09 00:58:17.437162 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.84s
2026-04-09 00:58:17.437168 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s
2026-04-09 00:58:17.437174 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.58s
2026-04-09 00:58:17.437179 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.56s
2026-04-09 00:58:17.437185 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s
2026-04-09 00:58:17.437191 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2026-04-09 00:58:17.437196 |
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2026-04-09 00:58:17.437202 | orchestrator | 2026-04-09 00:58:17 | INFO  | Task b12feb7d-98df-4613-b9fe-bb421a7a179f is in state SUCCESS
2026-04-09 00:58:17.437208 | orchestrator | 2026-04-09 00:58:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 00:58:17.437218 | orchestrator | 2026-04-09 00:58:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:58:17.438093 | orchestrator | 2026-04-09 00:58:17 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED
2026-04-09 00:58:17.438137 | orchestrator | 2026-04-09 00:58:17 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:58:20.465121 | orchestrator | 2026-04-09 00:58:20 | INFO  | Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED
2026-04-09 00:58:20.466061 | orchestrator | 2026-04-09 00:58:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 00:58:20.467224 | orchestrator | 2026-04-09 00:58:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:58:20.468412 | orchestrator | 2026-04-09 00:58:20 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED
2026-04-09 00:58:20.468510 | orchestrator | 2026-04-09 00:58:20 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:58:23.500822 | orchestrator | 2026-04-09 00:58:23 | INFO  | Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED
2026-04-09 00:58:23.502706 | orchestrator | 2026-04-09 00:58:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 00:58:23.504353 | orchestrator | 2026-04-09 00:58:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:58:23.505860 | orchestrator | 2026-04-09 00:58:23 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED
2026-04-09 00:58:23.505935 | orchestrator |
2026-04-09 00:58:23 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:58:26.561988 | orchestrator | 2026-04-09 00:58:26 | INFO  | Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED
2026-04-09 00:58:26.562776 | orchestrator | 2026-04-09 00:58:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 00:58:26.564181 | orchestrator | 2026-04-09 00:58:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:58:26.566765 | orchestrator | 2026-04-09 00:58:26 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED
2026-04-09 00:58:26.566815 | orchestrator | 2026-04-09 00:58:26 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:59:00.114192 | orchestrator | 2026-04-09 00:59:00 | INFO  | Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED
2026-04-09 00:59:00.114336 | orchestrator | 2026-04-09 00:59:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 00:59:00.118254 | orchestrator | 2026-04-09 00:59:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:59:00.121437 | orchestrator | 2026-04-09 00:59:00 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED
2026-04-09 00:59:00.121538 | orchestrator | 2026-04-09 00:59:00 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:59:03.165011 | orchestrator
| 2026-04-09 00:59:03 | INFO  | Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED 2026-04-09 00:59:03.165685 | orchestrator | 2026-04-09 00:59:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 00:59:03.166677 | orchestrator | 2026-04-09 00:59:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:59:03.167826 | orchestrator | 2026-04-09 00:59:03 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED 2026-04-09 00:59:03.167860 | orchestrator | 2026-04-09 00:59:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:06.220091 | orchestrator | 2026-04-09 00:59:06 | INFO  | Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED 2026-04-09 00:59:06.225249 | orchestrator | 2026-04-09 00:59:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 00:59:06.225308 | orchestrator | 2026-04-09 00:59:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:59:06.225313 | orchestrator | 2026-04-09 00:59:06 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED 2026-04-09 00:59:06.225319 | orchestrator | 2026-04-09 00:59:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:09.260979 | orchestrator | 2026-04-09 00:59:09 | INFO  | Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED 2026-04-09 00:59:09.261156 | orchestrator | 2026-04-09 00:59:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 00:59:09.263405 | orchestrator | 2026-04-09 00:59:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:59:09.264172 | orchestrator | 2026-04-09 00:59:09 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED 2026-04-09 00:59:09.264243 | orchestrator | 2026-04-09 00:59:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:12.317518 | orchestrator | 2026-04-09 00:59:12 | INFO  | 
Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED 2026-04-09 00:59:12.317605 | orchestrator | 2026-04-09 00:59:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 00:59:12.318121 | orchestrator | 2026-04-09 00:59:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:59:12.319956 | orchestrator | 2026-04-09 00:59:12 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED 2026-04-09 00:59:12.319981 | orchestrator | 2026-04-09 00:59:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:15.360791 | orchestrator | 2026-04-09 00:59:15 | INFO  | Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED 2026-04-09 00:59:15.362063 | orchestrator | 2026-04-09 00:59:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 00:59:15.364325 | orchestrator | 2026-04-09 00:59:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:59:15.367664 | orchestrator | 2026-04-09 00:59:15 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED 2026-04-09 00:59:15.367996 | orchestrator | 2026-04-09 00:59:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:18.412146 | orchestrator | 2026-04-09 00:59:18 | INFO  | Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED 2026-04-09 00:59:18.412523 | orchestrator | 2026-04-09 00:59:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 00:59:18.413598 | orchestrator | 2026-04-09 00:59:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:59:18.414441 | orchestrator | 2026-04-09 00:59:18 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED 2026-04-09 00:59:18.416107 | orchestrator | 2026-04-09 00:59:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:21.474700 | orchestrator | 2026-04-09 00:59:21 | INFO  | Task 
d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED 2026-04-09 00:59:21.476837 | orchestrator | 2026-04-09 00:59:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 00:59:21.479139 | orchestrator | 2026-04-09 00:59:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:59:21.480991 | orchestrator | 2026-04-09 00:59:21 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED 2026-04-09 00:59:21.481044 | orchestrator | 2026-04-09 00:59:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:24.527252 | orchestrator | 2026-04-09 00:59:24 | INFO  | Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED 2026-04-09 00:59:24.528047 | orchestrator | 2026-04-09 00:59:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 00:59:24.528666 | orchestrator | 2026-04-09 00:59:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:59:24.530483 | orchestrator | 2026-04-09 00:59:24 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED 2026-04-09 00:59:24.530520 | orchestrator | 2026-04-09 00:59:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:27.567754 | orchestrator | 2026-04-09 00:59:27 | INFO  | Task d8de9635-49f1-4929-b114-52e31e0a0c2f is in state STARTED 2026-04-09 00:59:27.569367 | orchestrator | 2026-04-09 00:59:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 00:59:27.571398 | orchestrator | 2026-04-09 00:59:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 00:59:27.573404 | orchestrator | 2026-04-09 00:59:27 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED 2026-04-09 00:59:27.573453 | orchestrator | 2026-04-09 00:59:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 00:59:30.620848 | orchestrator | 2026-04-09 00:59:30 | INFO  | Task 
d8de9635-49f1-4929-b114-52e31e0a0c2f is in state SUCCESS 2026-04-09 00:59:30.622448 | orchestrator | 2026-04-09 00:59:30.622494 | orchestrator | 2026-04-09 00:59:30.622501 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-09 00:59:30.622506 | orchestrator | 2026-04-09 00:59:30.622511 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-09 00:59:30.622515 | orchestrator | Thursday 09 April 2026 00:58:18 +0000 (0:00:00.359) 0:00:00.359 ******** 2026-04-09 00:59:30.622520 | orchestrator | ok: [testbed-manager] 2026-04-09 00:59:30.622525 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:30.622530 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:30.622534 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:30.622538 | orchestrator | ok: [testbed-node-3] 2026-04-09 00:59:30.622542 | orchestrator | ok: [testbed-node-4] 2026-04-09 00:59:30.622546 | orchestrator | ok: [testbed-node-5] 2026-04-09 00:59:30.622550 | orchestrator | 2026-04-09 00:59:30.622554 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-09 00:59:30.622574 | orchestrator | Thursday 09 April 2026 00:58:18 +0000 (0:00:00.666) 0:00:01.025 ******** 2026-04-09 00:59:30.622580 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-09 00:59:30.622584 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-09 00:59:30.622623 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-09 00:59:30.622627 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-09 00:59:30.622631 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-09 00:59:30.622635 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-09 00:59:30.622639 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-09 
00:59:30.622643 | orchestrator | 2026-04-09 00:59:30.622647 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-09 00:59:30.622651 | orchestrator | 2026-04-09 00:59:30.622655 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-09 00:59:30.622660 | orchestrator | Thursday 09 April 2026 00:58:19 +0000 (0:00:00.711) 0:00:01.737 ******** 2026-04-09 00:59:30.622665 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:59:30.622671 | orchestrator | 2026-04-09 00:59:30.622675 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-09 00:59:30.622678 | orchestrator | Thursday 09 April 2026 00:58:20 +0000 (0:00:00.950) 0:00:02.687 ******** 2026-04-09 00:59:30.622696 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready 
HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 00:59:30.622704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.622709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.622722 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.622731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.622736 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.622742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.622746 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.622753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.622758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.622762 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.622774 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:59:30.622779 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.622783 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.622788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.622796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.622800 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.622810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.622817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.622821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.622826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.622830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.622836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.622910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.622921 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.622930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.622966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.622971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.622975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.622979 | orchestrator | 2026-04-09 00:59:30.622983 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-09 00:59:30.622988 | orchestrator | Thursday 09 April 2026 00:58:23 +0000 (0:00:03.097) 0:00:05.785 ******** 2026-04-09 00:59:30.622992 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-09 00:59:30.622996 | orchestrator | 2026-04-09 00:59:30.623000 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-09 00:59:30.623004 | orchestrator | Thursday 09 April 2026 00:58:24 +0000 (0:00:01.164) 0:00:06.950 ******** 2026-04-09 00:59:30.623011 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-09 00:59:30.623020 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.623027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.623031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.623035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.623039 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.623044 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.623051 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623059 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.623068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623094 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623105 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623118 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623140 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:59:30.623149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623832 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623858 | orchestrator |
2026-04-09 00:59:30.623863 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-04-09 00:59:30.623868 | orchestrator | Thursday 09 April 2026 00:58:30 +0000 (0:00:06.110) 0:00:13.060 ********
2026-04-09 00:59:30.623878 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-09 00:59:30.623889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.623894 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.623898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.623902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623918 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623939 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:59:30.623943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.623951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.623958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.623962 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623966 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:59:30.623975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623979 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:59:30.623983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623987 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:59:30.623992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.623996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.624004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.624011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.624015 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:59:30.624019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.624023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.624029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.624034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.624038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.624045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.624049 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:59:30.624054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.624060 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:59:30.624067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.624074 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:59:30.624081 | orchestrator |
2026-04-09 00:59:30.624090 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-04-09 00:59:30.624097 | orchestrator | Thursday 09 April 2026 00:58:32 +0000 (0:00:01.667) 0:00:14.727 ********
2026-04-09 00:59:30.624108 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-09 00:59:30.624115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.624127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.624134 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.624141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.624437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-09 00:59:30.624454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.624474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.624480 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-09 00:59:30.624485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes':
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.624497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:59:30.624505 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  
2026-04-09 00:59:30.624510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.624514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.624518 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:30.624535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.624539 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:30.624547 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.624551 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:59:30.624555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:59:30.624560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.624564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.624571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.624575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:59:30.624631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.624638 | orchestrator | 
skipping: [testbed-node-3] 2026-04-09 00:59:30.624642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.624651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:59:30.624656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.624660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.624665 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:30.624671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.624676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.624680 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:59:30.624684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.624689 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:59:30.624697 | orchestrator | 2026-04-09 00:59:30.624712 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-09 00:59:30.624717 | orchestrator | Thursday 09 April 2026 00:58:35 +0000 (0:00:02.388) 0:00:17.116 ******** 2026-04-09 00:59:30.624722 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 00:59:30.624727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.624731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.624738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.624743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.624749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.624777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.624785 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.624791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.624795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.624799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.624807 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.624815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.624820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.624842 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.624847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.624851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.624855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.624859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.624866 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:59:30.624886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.624891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.624895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.624899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.624903 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-09 00:59:30.624913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.624919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.624927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.624949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-09 00:59:30.624956 | orchestrator |
2026-04-09 00:59:30.624962 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-04-09 00:59:30.624967 | orchestrator | Thursday 09 April 2026 00:58:41 +0000 (0:00:06.531) 0:00:23.648 ********
2026-04-09 00:59:30.624975 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 00:59:30.624980 | orchestrator |
2026-04-09 00:59:30.624984 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-04-09 00:59:30.624988 | orchestrator | Thursday 09 April 2026 00:58:42 +0000 (0:00:01.009) 0:00:24.657 ********
2026-04-09 00:59:30.624992 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:59:30.624996 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:59:30.625000 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:59:30.625004 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:59:30.625008 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:59:30.625012 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:59:30.625016 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:59:30.625019 | orchestrator |
2026-04-09 00:59:30.625024 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-04-09 00:59:30.625027 | orchestrator | Thursday 09 April 2026 00:58:43 +0000 (0:00:00.803) 0:00:25.461 ********
2026-04-09 00:59:30.625031 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 00:59:30.625035 | orchestrator |
2026-04-09 00:59:30.625039 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-04-09 00:59:30.625043 | orchestrator | Thursday 09 April 2026 00:58:44 +0000 (0:00:00.764) 0:00:26.225 ********
2026-04-09 00:59:30.625047 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-04-09 00:59:30.625068 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 00:59:30.625072 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-04-09 00:59:30.625094 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 00:59:30.625098 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-04-09 00:59:30.625125 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-09 00:59:30.625130 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-04-09 00:59:30.625155 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-09 00:59:30.625161 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-04-09 00:59:30.625184 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-09 00:59:30.625189 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-04-09 00:59:30.625212 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-09 00:59:30.625216 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-04-09 00:59:30.625239 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-09 00:59:30.625244 | orchestrator |
2026-04-09 00:59:30.625248 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-04-09 00:59:30.625255 | orchestrator | Thursday 09 April 2026 00:58:45 +0000 (0:00:01.645) 0:00:27.870 ********
2026-04-09 00:59:30.625260 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-09 00:59:30.625265 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:59:30.625270 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-09 00:59:30.625275 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:59:30.625279 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-09 00:59:30.625284 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:59:30.625288 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-09 00:59:30.625293 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:59:30.625298 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-09 00:59:30.625302 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:59:30.625307 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-09 00:59:30.625311 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:59:30.625319 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-09 00:59:30.625324 | orchestrator |
2026-04-09 00:59:30.625328 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-04-09 00:59:30.625333 | orchestrator | Thursday 09 April 2026 00:58:59 +0000
(0:00:13.958) 0:00:41.829 ********
2026-04-09 00:59:30.625338 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-09 00:59:30.625342 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:59:30.625347 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-09 00:59:30.625351 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:59:30.625356 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-09 00:59:30.625360 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:59:30.625365 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-09 00:59:30.625370 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:59:30.625374 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-09 00:59:30.625379 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:59:30.625384 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-09 00:59:30.625388 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:59:30.625393 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-09 00:59:30.625397 | orchestrator |
2026-04-09 00:59:30.625401 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-04-09 00:59:30.625405 | orchestrator | Thursday 09 April 2026 00:59:02 +0000 (0:00:03.125) 0:00:44.955 ********
2026-04-09 00:59:30.625409 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-09 00:59:30.625413 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-09 00:59:30.625417 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:59:30.625421 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:59:30.625425 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-09 00:59:30.625429 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:59:30.625435 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-09 00:59:30.625440 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:59:30.625444 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-09 00:59:30.625448 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-09 00:59:30.625452 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:59:30.625456 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-09 00:59:30.625460 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:59:30.625464 | orchestrator |
2026-04-09 00:59:30.625467 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-04-09 00:59:30.625471 | orchestrator | Thursday 09 April 2026 00:59:04 +0000 (0:00:00.732) 0:00:46.514 ********
2026-04-09 00:59:30.625475 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 00:59:30.625479 | orchestrator |
2026-04-09 00:59:30.625483 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-04-09 00:59:30.625487 | orchestrator | Thursday 09 April 2026 00:59:05 +0000 (0:00:00.732) 0:00:47.246 ********
2026-04-09 00:59:30.625494 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:59:30.625498 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:59:30.625502 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:59:30.625506 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:59:30.625510 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:59:30.625514 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:59:30.625521 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:59:30.625525 | orchestrator |
2026-04-09 00:59:30.625529 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-04-09 00:59:30.625533 | orchestrator | Thursday 09 April 2026 00:59:05 +0000 (0:00:00.717) 0:00:47.963 ********
2026-04-09 00:59:30.625537 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:59:30.625541 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:59:30.625545 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:59:30.625549 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:59:30.625553 | orchestrator | changed: [testbed-node-0]
2026-04-09 00:59:30.625557 | orchestrator | changed: [testbed-node-1]
2026-04-09 00:59:30.625561 | orchestrator | changed: [testbed-node-2]
2026-04-09 00:59:30.625565 | orchestrator |
2026-04-09 00:59:30.625569 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-04-09 00:59:30.625573 | orchestrator | Thursday 09 April 2026 00:59:07 +0000 (0:00:01.728) 0:00:49.692 ********
2026-04-09 00:59:30.625576 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 00:59:30.625580 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 00:59:30.625604 | orchestrator | skipping: [testbed-manager]
2026-04-09 00:59:30.625610 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 00:59:30.625621 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:59:30.625628 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:59:30.625634 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 00:59:30.625640 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:59:30.625646 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 00:59:30.625653 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:59:30.625659 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 00:59:30.625665 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:59:30.625671 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-09 00:59:30.625677 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:59:30.625683 | orchestrator |
2026-04-09 00:59:30.625688 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-04-09 00:59:30.625695 | orchestrator | Thursday 09 April 2026 00:59:08 +0000 (0:00:01.259) 0:00:50.952 ********
2026-04-09 00:59:30.625701 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 00:59:30.625708 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:59:30.625714 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 00:59:30.625721 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:59:30.625727 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 00:59:30.625733 | orchestrator | skipping: [testbed-node-3]
2026-04-09 00:59:30.625739 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 00:59:30.625745 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:59:30.625751 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 00:59:30.625762 | orchestrator | skipping: [testbed-node-4]
2026-04-09 00:59:30.625768 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 00:59:30.625774 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-09 00:59:30.625784 | orchestrator | skipping: [testbed-node-5]
2026-04-09 00:59:30.625791 | orchestrator |
2026-04-09 00:59:30.625797 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-04-09 00:59:30.625803 | orchestrator | Thursday 09 April 2026 00:59:10 +0000 (0:00:01.535) 0:00:52.487 ********
2026-04-09 00:59:30.625810 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2026-04-09 00:59:30.625833 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-09 00:59:30.625837 | orchestrator |
2026-04-09 00:59:30.625841 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-04-09 00:59:30.625845 | orchestrator | Thursday 09 April 2026 00:59:11 +0000 (0:00:01.066) 0:00:53.553 ********
2026-04-09 00:59:30.625849 | orchestrator | skipping:
[testbed-manager] 2026-04-09 00:59:30.625853 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:30.625857 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:30.625861 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:30.625865 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:59:30.625869 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:59:30.625872 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:59:30.625876 | orchestrator | 2026-04-09 00:59:30.625880 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-09 00:59:30.625884 | orchestrator | Thursday 09 April 2026 00:59:12 +0000 (0:00:00.608) 0:00:54.162 ******** 2026-04-09 00:59:30.625888 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:59:30.625892 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:30.625896 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:30.625900 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:30.625904 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:59:30.625912 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:59:30.625916 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:59:30.625920 | orchestrator | 2026-04-09 00:59:30.625924 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-04-09 00:59:30.625928 | orchestrator | Thursday 09 April 2026 00:59:12 +0000 (0:00:00.684) 0:00:54.847 ******** 2026-04-09 00:59:30.625933 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-09 00:59:30.625938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.625947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.625954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.625958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.625963 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.625970 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.625974 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-09 00:59:30.625978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.625986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.625990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.625996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.626125 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.626137 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.626144 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.626150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.626162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.626169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.626175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.626186 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option 
httpchk']}}}}) 2026-04-09 00:59:30.626197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.626204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.626215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.626221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.626225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-09 00:59:30.626232 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.626236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.626240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.626248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-09 00:59:30.626252 | orchestrator | 2026-04-09 00:59:30.626256 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-04-09 00:59:30.626266 | orchestrator | Thursday 09 April 2026 00:59:16 +0000 (0:00:03.956) 0:00:58.804 ******** 2026-04-09 00:59:30.626270 | orchestrator | changed: [testbed-manager] => { 2026-04-09 00:59:30.626275 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:59:30.626279 | orchestrator | } 2026-04-09 00:59:30.626283 | orchestrator | changed: [testbed-node-0] => { 2026-04-09 00:59:30.626287 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:59:30.626291 | orchestrator | } 2026-04-09 00:59:30.626295 | orchestrator | changed: [testbed-node-1] => { 2026-04-09 00:59:30.626299 | 
orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:59:30.626303 | orchestrator | } 2026-04-09 00:59:30.626307 | orchestrator | changed: [testbed-node-2] => { 2026-04-09 00:59:30.626311 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:59:30.626315 | orchestrator | } 2026-04-09 00:59:30.626319 | orchestrator | changed: [testbed-node-3] => { 2026-04-09 00:59:30.626323 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:59:30.626327 | orchestrator | } 2026-04-09 00:59:30.626331 | orchestrator | changed: [testbed-node-4] => { 2026-04-09 00:59:30.626335 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:59:30.626339 | orchestrator | } 2026-04-09 00:59:30.626343 | orchestrator | changed: [testbed-node-5] => { 2026-04-09 00:59:30.626347 | orchestrator |  "msg": "Notifying handlers" 2026-04-09 00:59:30.626351 | orchestrator | } 2026-04-09 00:59:30.626355 | orchestrator | 2026-04-09 00:59:30.626359 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-09 00:59:30.626363 | orchestrator | Thursday 09 April 2026 00:59:17 +0000 (0:00:00.881) 0:00:59.685 ******** 2026-04-09 00:59:30.626367 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-09 00:59:30.626375 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:59:30.626379 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.626387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:59:30.626395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.626399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.626404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.626410 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-09 00:59:30.626415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.626419 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-09 00:59:30.626427 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:30.626434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:59:30.626439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.626443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.626447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.626451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.626457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:59:30.626461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.626469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.626476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.626481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-09 00:59:30.626485 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:59:30.626489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:59:30.626494 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:30.626498 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:30.626502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.626506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.626510 | orchestrator | skipping: [testbed-node-3] 2026-04-09 00:59:30.626518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:59:30.626529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.626537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.626541 | orchestrator | skipping: [testbed-node-4] 2026-04-09 00:59:30.626545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-09 00:59:30.626549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.626553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-09 00:59:30.626557 | orchestrator | skipping: [testbed-node-5] 2026-04-09 00:59:30.626561 | orchestrator | 2026-04-09 00:59:30.626565 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-09 00:59:30.626569 | orchestrator | Thursday 09 April 2026 00:59:19 +0000 (0:00:02.125) 0:01:01.811 ******** 2026-04-09 00:59:30.626574 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-09 00:59:30.626578 | orchestrator | skipping: [testbed-manager] 2026-04-09 00:59:30.626582 | orchestrator | 2026-04-09 00:59:30.626603 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 
2026-04-09 00:59:30.626607 | orchestrator | Thursday 09 April 2026 00:59:20 +0000 (0:00:01.166) 0:01:02.978 ******** 2026-04-09 00:59:30.626611 | orchestrator | 2026-04-09 00:59:30.626615 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 00:59:30.626623 | orchestrator | Thursday 09 April 2026 00:59:20 +0000 (0:00:00.067) 0:01:03.046 ******** 2026-04-09 00:59:30.626627 | orchestrator | 2026-04-09 00:59:30.626631 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 00:59:30.626635 | orchestrator | Thursday 09 April 2026 00:59:21 +0000 (0:00:00.227) 0:01:03.273 ******** 2026-04-09 00:59:30.626639 | orchestrator | 2026-04-09 00:59:30.626645 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 00:59:30.626649 | orchestrator | Thursday 09 April 2026 00:59:21 +0000 (0:00:00.061) 0:01:03.335 ******** 2026-04-09 00:59:30.626654 | orchestrator | 2026-04-09 00:59:30.626658 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 00:59:30.626662 | orchestrator | Thursday 09 April 2026 00:59:21 +0000 (0:00:00.061) 0:01:03.396 ******** 2026-04-09 00:59:30.626666 | orchestrator | 2026-04-09 00:59:30.626670 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 00:59:30.626674 | orchestrator | Thursday 09 April 2026 00:59:21 +0000 (0:00:00.057) 0:01:03.454 ******** 2026-04-09 00:59:30.626678 | orchestrator | 2026-04-09 00:59:30.626681 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-09 00:59:30.626685 | orchestrator | Thursday 09 April 2026 00:59:21 +0000 (0:00:00.061) 0:01:03.516 ******** 2026-04-09 00:59:30.626689 | orchestrator | 2026-04-09 00:59:30.626693 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] 
************* 2026-04-09 00:59:30.626697 | orchestrator | Thursday 09 April 2026 00:59:21 +0000 (0:00:00.085) 0:01:03.601 ******** 2026-04-09 00:59:30.626706 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_yk5jwlgi/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_yk5jwlgi/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_yk5jwlgi/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_yk5jwlgi/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise 
create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-server: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:59:30.626712 | orchestrator | 2026-04-09 00:59:30.626720 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-09 00:59:30.626724 | orchestrator | Thursday 09 April 2026 00:59:23 +0000 (0:00:02.363) 0:01:05.964 ******** 2026-04-09 00:59:30.626734 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_wseyfupj/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_wseyfupj/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_wseyfupj/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_wseyfupj/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"} 2026-04-09 00:59:30.626742 | orchestrator | fatal: [testbed-node-3]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_gomr_uhw/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_gomr_uhw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_gomr_uhw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_gomr_uhw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 400 Client Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F%2Fprometheus-node-exporter: Bad Request (\"invalid reference format\")\\n'"}
2026-04-09 00:59:30.626758 | orchestrator | fatal: [testbed-node-0]: FAILED! => (same "invalid reference format" traceback as above)
2026-04-09 00:59:30.626776 | orchestrator | fatal: [testbed-node-1]: FAILED! => (same "invalid reference format" traceback as above)
2026-04-09 00:59:30.626793 | orchestrator | fatal: [testbed-node-4]: FAILED! => (same "invalid reference format" traceback as above)
2026-04-09 00:59:30.626812 | orchestrator | fatal: [testbed-node-5]: FAILED! => (same "invalid reference format" traceback as above)
2026-04-09 00:59:30.626820 | orchestrator |
2026-04-09 00:59:30.626826 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:59:30.626832 | orchestrator | testbed-manager : ok=18  changed=9  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0
2026-04-09 00:59:30.626840 | orchestrator | testbed-node-0 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0
2026-04-09 00:59:30.626846 | orchestrator | testbed-node-1 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0
2026-04-09 00:59:30.626853 | orchestrator | testbed-node-2 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0
2026-04-09 00:59:30.626860 | orchestrator | testbed-node-3 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0
2026-04-09 00:59:30.626867 | orchestrator | testbed-node-4 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0
2026-04-09 00:59:30.626878 | orchestrator | testbed-node-5 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0
2026-04-09 00:59:30.626884 | orchestrator |
2026-04-09 00:59:30.626890 | orchestrator |
2026-04-09 00:59:30.626896 | orchestrator | TASKS RECAP
********************************************************************
2026-04-09 00:59:30.626903 | orchestrator | Thursday 09 April 2026 00:59:27 +0000 (0:00:04.038) 0:01:10.004 ********
2026-04-09 00:59:30.626909 | orchestrator | ===============================================================================
2026-04-09 00:59:30.626915 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.96s
2026-04-09 00:59:30.626922 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.53s
2026-04-09 00:59:30.626928 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.11s
2026-04-09 00:59:30.626935 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 4.04s
2026-04-09 00:59:30.626941 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 3.96s
2026-04-09 00:59:30.626947 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.13s
2026-04-09 00:59:30.626954 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.10s
2026-04-09 00:59:30.626960 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.39s
2026-04-09 00:59:30.626966 | orchestrator | prometheus : Restart prometheus-server container ------------------------ 2.36s
2026-04-09 00:59:30.626976 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.13s
2026-04-09 00:59:30.626983 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.73s
2026-04-09 00:59:30.626990 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.67s
2026-04-09 00:59:30.626997 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.65s
2026-04-09 00:59:30.627002 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.56s
2026-04-09 00:59:30.627006 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.54s
2026-04-09 00:59:30.627011 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.26s
2026-04-09 00:59:30.627015 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 1.17s
2026-04-09 00:59:30.627020 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.16s
2026-04-09 00:59:30.627025 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 1.07s
2026-04-09 00:59:30.627029 | orchestrator | prometheus : Find custom prometheus alert rules files ------------------- 1.01s
2026-04-09 00:59:30.627034 | orchestrator | 2026-04-09 00:59:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 00:59:30.627038 | orchestrator | 2026-04-09 00:59:30 | INFO  | Task 62002078-a701-4047-8cd7-1d6b9a01e9b1 is in state STARTED
2026-04-09 00:59:30.627043 | orchestrator | 2026-04-09 00:59:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:59:30.627570 | orchestrator | 2026-04-09 00:59:30 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED
2026-04-09 00:59:30.627737 | orchestrator | 2026-04-09 00:59:30 | INFO  | Wait 1 second(s) until the next check
[identical polling rounds at 00:59:33, 00:59:36, 00:59:39, 00:59:42 and 00:59:45 elided]
2026-04-09 00:59:48.882540 | orchestrator | 2026-04-09 00:59:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 00:59:48.884224 | orchestrator | 2026-04-09 00:59:48 | INFO  | Task 62002078-a701-4047-8cd7-1d6b9a01e9b1 is in state SUCCESS
2026-04-09 00:59:48.885713 | orchestrator |
2026-04-09 00:59:48.885759 | orchestrator |
2026-04-09 00:59:48.885767 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-09 00:59:48.885773 | orchestrator |
2026-04-09 00:59:48.885777 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-09 00:59:48.885782 | orchestrator | Thursday 09 April 2026 00:59:31 +0000 (0:00:00.268) 0:00:00.268 ********
2026-04-09 00:59:48.885786 | orchestrator | ok: [testbed-node-0]
2026-04-09 00:59:48.885791 | orchestrator | ok: [testbed-node-1]
2026-04-09 00:59:48.885796 | orchestrator | ok: [testbed-node-2]
2026-04-09 00:59:48.885800 | orchestrator |
2026-04-09 00:59:48.885822 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-09 00:59:48.885827 | orchestrator | Thursday 09 April 2026 00:59:31 +0000 (0:00:00.259) 0:00:00.527
********
2026-04-09 00:59:48.885831 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-04-09 00:59:48.885836 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-04-09 00:59:48.885840 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-04-09 00:59:48.885844 | orchestrator |
2026-04-09 00:59:48.885848 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-04-09 00:59:48.885852 | orchestrator |
2026-04-09 00:59:48.885856 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-09 00:59:48.885861 | orchestrator | Thursday 09 April 2026 00:59:31 +0000 (0:00:00.533) 0:00:00.790 ********
2026-04-09 00:59:48.885865 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:59:48.885870 | orchestrator |
2026-04-09 00:59:48.885875 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-04-09 00:59:48.885879 | orchestrator | Thursday 09 April 2026 00:59:32 +0000 (0:00:00.533) 0:00:01.323 ********
2026-04-09 00:59:48.885885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:59:48.885892 | orchestrator | changed: [testbed-node-2] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.885896 | orchestrator | changed: [testbed-node-1] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.885900 | orchestrator |
2026-04-09 00:59:48.885904 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-04-09 00:59:48.885921 | orchestrator | Thursday 09 April 2026 00:59:33 +0000 (0:00:01.014) 0:00:02.337 ********
2026-04-09 00:59:48.885925 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-09 00:59:48.885930 | orchestrator |
2026-04-09 00:59:48.885934 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-09 00:59:48.885943 | orchestrator | Thursday 09 April 2026 00:59:33 +0000 (0:00:00.847) 0:00:03.185 ********
2026-04-09 00:59:48.885947 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-09 00:59:48.885951 | orchestrator |
2026-04-09 00:59:48.885955 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-04-09 00:59:48.885969 | orchestrator | Thursday 09 April 2026 00:59:34 +0000 (0:00:00.483) 0:00:03.668 ********
2026-04-09 00:59:48.885974 | orchestrator | changed: [testbed-node-2] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.885978 | orchestrator | changed: [testbed-node-1] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.885982 | orchestrator | changed: [testbed-node-0] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.885986 | orchestrator |
2026-04-09 00:59:48.885991 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-04-09 00:59:48.885995 | orchestrator | Thursday 09 April 2026 00:59:36 +0000 (0:00:01.590) 0:00:05.258 ********
2026-04-09 00:59:48.885999 | orchestrator | skipping: [testbed-node-0] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.886004 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:59:48.886049 | orchestrator | skipping: [testbed-node-1] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.886059 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:59:48.886064 | orchestrator | skipping: [testbed-node-2] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.886068 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:59:48.886072 | orchestrator |
2026-04-09 00:59:48.886076 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-04-09 00:59:48.886080 | orchestrator | Thursday 09 April 2026 00:59:36 +0000 (0:00:00.425) 0:00:05.684 ********
2026-04-09 00:59:48.886084 | orchestrator | skipping: [testbed-node-0] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.886088 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:59:48.886092 | orchestrator | skipping: [testbed-node-1] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.886096 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:59:48.886100 | orchestrator | skipping: [testbed-node-2] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.886107 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:59:48.886111 | orchestrator |
2026-04-09 00:59:48.886115 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-04-09 00:59:48.886120 | orchestrator | Thursday 09 April 2026 00:59:37 +0000 (0:00:00.656) 0:00:06.340 ********
2026-04-09 00:59:48.886131 | orchestrator | changed: [testbed-node-0] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.886136 | orchestrator | changed: [testbed-node-1] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.886140 | orchestrator | changed: [testbed-node-2] => (item=grafana, same service dict as above)
2026-04-09 00:59:48.886144 | orchestrator |
2026-04-09 00:59:48.886148 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-04-09 00:59:48.886152 | orchestrator | Thursday 09 April 2026 00:59:38 +0000 (0:00:01.261) 0:00:07.601 ********
2026-04-09 00:59:48.886156 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:59:48.886163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:59:48.886174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-09 00:59:48.886179 | orchestrator | 2026-04-09 00:59:48.886183 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-09 00:59:48.886187 | orchestrator | Thursday 09 April 2026 00:59:39 +0000 (0:00:01.580) 0:00:09.182 ******** 2026-04-09 00:59:48.886191 | orchestrator | skipping: [testbed-node-0] 2026-04-09 00:59:48.886195 | orchestrator | skipping: [testbed-node-1] 2026-04-09 00:59:48.886199 | orchestrator | skipping: [testbed-node-2] 2026-04-09 00:59:48.886203 | orchestrator | 2026-04-09 00:59:48.886207 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-09 00:59:48.886211 | orchestrator | Thursday 09 April 2026 00:59:40 +0000 (0:00:00.278) 0:00:09.460 ******** 2026-04-09 00:59:48.886215 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-09 00:59:48.886219 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-09 00:59:48.886223 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-09 00:59:48.886227 | orchestrator | 2026-04-09 00:59:48.886231 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-09 00:59:48.886235 | orchestrator | Thursday 09 April 2026 00:59:41 +0000 (0:00:01.268) 0:00:10.728 ******** 2026-04-09 00:59:48.886239 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-09 
00:59:48.886244 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-09 00:59:48.886248 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-09 00:59:48.886252 | orchestrator | 2026-04-09 00:59:48.886256 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-04-09 00:59:48.886260 | orchestrator | Thursday 09 April 2026 00:59:42 +0000 (0:00:01.274) 0:00:12.003 ******** 2026-04-09 00:59:48.886264 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-09 00:59:48.886268 | orchestrator | 2026-04-09 00:59:48.886272 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-04-09 00:59:48.886276 | orchestrator | Thursday 09 April 2026 00:59:43 +0000 (0:00:00.714) 0:00:12.718 ******** 2026-04-09 00:59:48.886280 | orchestrator | ok: [testbed-node-0] 2026-04-09 00:59:48.886287 | orchestrator | ok: [testbed-node-1] 2026-04-09 00:59:48.886291 | orchestrator | ok: [testbed-node-2] 2026-04-09 00:59:48.886295 | orchestrator | 2026-04-09 00:59:48.886299 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-09 00:59:48.886304 | orchestrator | Thursday 09 April 2026 00:59:44 +0000 (0:00:00.821) 0:00:13.539 ******** 2026-04-09 00:59:48.886308 | orchestrator | changed: [testbed-node-0] 2026-04-09 00:59:48.886313 | orchestrator | changed: [testbed-node-1] 2026-04-09 00:59:48.886390 | orchestrator | changed: [testbed-node-2] 2026-04-09 00:59:48.886395 | orchestrator | 2026-04-09 00:59:48.886400 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-04-09 00:59:48.886405 | orchestrator | Thursday 09 April 2026 00:59:45 +0000 (0:00:01.210) 0:00:14.749 ******** 2026-04-09 00:59:48.886410 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:59:48.886418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:59:48.886427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:59:48.886432 | orchestrator |
2026-04-09 00:59:48.886437 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] ***
2026-04-09 00:59:48.886441 | orchestrator | Thursday 09 April 2026 00:59:46 +0000 (0:00:00.955) 0:00:15.704 ********
2026-04-09 00:59:48.886446 | orchestrator | changed: [testbed-node-0] => {
2026-04-09 00:59:48.886451 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:59:48.886455 | orchestrator | }
2026-04-09 00:59:48.886460 | orchestrator | changed: [testbed-node-1] => {
2026-04-09 00:59:48.886464 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:59:48.886469 | orchestrator | }
2026-04-09 00:59:48.886473 | orchestrator | changed: [testbed-node-2] => {
2026-04-09 00:59:48.886478 | orchestrator |  "msg": "Notifying handlers"
2026-04-09 00:59:48.886482 | orchestrator | }
2026-04-09 00:59:48.886487 | orchestrator |
2026-04-09 00:59:48.886491 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-09 00:59:48.886500 | orchestrator | Thursday 09 April 2026 00:59:46 +0000 (0:00:00.296) 0:00:16.001 ********
2026-04-09 00:59:48.886505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:59:48.886510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:59:48.886515 | orchestrator | skipping: [testbed-node-0]
2026-04-09 00:59:48.886519 | orchestrator | skipping: [testbed-node-1]
2026-04-09 00:59:48.886524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release//grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-09 00:59:48.886531 | orchestrator | skipping: [testbed-node-2]
2026-04-09 00:59:48.886535 | orchestrator |
2026-04-09 00:59:48.886540 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-04-09 00:59:48.886544 | orchestrator | Thursday 09 April 2026 00:59:47 +0000 (0:00:00.636) 0:00:16.638 ********
2026-04-09 00:59:48.886549 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-09 00:59:48.886553 | orchestrator |
2026-04-09 00:59:48.886558 | orchestrator | PLAY RECAP *********************************************************************
2026-04-09 00:59:48.886639 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-09 00:59:48.886650 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 00:59:48.886657 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-09 00:59:48.886664 | orchestrator |
2026-04-09 00:59:48.886670 | orchestrator |
2026-04-09 00:59:48.886677 | orchestrator | TASKS RECAP ********************************************************************
2026-04-09 00:59:48.886690 | orchestrator | Thursday 09 April 2026 00:59:48 +0000 (0:00:00.730) 0:00:17.368 ********
2026-04-09 00:59:48.886696 | orchestrator | ===============================================================================
2026-04-09 00:59:48.886702 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.59s
2026-04-09 00:59:48.886708 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.58s
2026-04-09 00:59:48.886713 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.27s
2026-04-09 00:59:48.886719 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.27s
2026-04-09 00:59:48.886724 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.26s
2026-04-09 00:59:48.886730 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.21s
2026-04-09 00:59:48.886736 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.01s
2026-04-09 00:59:48.886742 | orchestrator | service-check-containers : grafana | Check containers ------------------- 0.96s
2026-04-09 00:59:48.886748 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.85s
2026-04-09 00:59:48.886754 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.82s
2026-04-09 00:59:48.886760 | orchestrator | grafana : Creating grafana database ------------------------------------- 0.73s
2026-04-09 00:59:48.886766 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.71s
2026-04-09 00:59:48.886772 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.66s
2026-04-09 00:59:48.886778 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.64s
2026-04-09 00:59:48.886784 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.53s
2026-04-09 00:59:48.886790 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.48s
2026-04-09 00:59:48.886797 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.43s
2026-04-09 00:59:48.886803 | orchestrator | service-check-containers : grafana | Notify handlers to restart containers --- 0.30s
2026-04-09 00:59:48.886809 | orchestrator | grafana : Copying over extra configuration file ------------------------- 0.28s
2026-04-09 00:59:48.886813 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.26s
2026-04-09 00:59:48.886817 | orchestrator | 2026-04-09 00:59:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:59:48.888162 | orchestrator | 2026-04-09 00:59:48 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED
2026-04-09 00:59:48.888192 | orchestrator | 2026-04-09 00:59:48 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:59:51.927270 | orchestrator | 2026-04-09 00:59:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 00:59:51.928844 | orchestrator | 2026-04-09 00:59:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:59:51.930722 | orchestrator | 2026-04-09 00:59:51 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED
2026-04-09 00:59:51.930765 | orchestrator | 2026-04-09 00:59:51 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:59:54.972428 | orchestrator | 2026-04-09 00:59:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 00:59:54.974543 | orchestrator | 2026-04-09 00:59:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:59:54.976467 | orchestrator | 2026-04-09 00:59:54 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state STARTED
2026-04-09 00:59:54.976528 | orchestrator | 2026-04-09 00:59:54 | INFO  | Wait 1 second(s) until the next check
2026-04-09 00:59:58.015519 | orchestrator | 2026-04-09 00:59:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 00:59:58.015640 | orchestrator | 2026-04-09 00:59:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 00:59:58.016391 | orchestrator | 2026-04-09 00:59:58 | INFO  | Task 1b04543b-fe89-4512-a1a8-b890059dd592 is in state SUCCESS
2026-04-09 00:59:58.017357 | orchestrator |
2026-04-09 00:59:58 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:00:01.048571 | orchestrator | 2026-04-09 01:00:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 01:00:01.050542 | orchestrator | 2026-04-09 01:00:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 01:00:01.050656 | orchestrator | 2026-04-09 01:00:01 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:02:18.025325 | orchestrator | 2026-04-09 01:02:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 01:02:18.027297 | orchestrator | 2026-04-09 01:02:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 01:02:18.027385 | orchestrator | 2026-04-09 01:02:18 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:02:21.068307 | orchestrator | 2026-04-09 01:02:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 01:02:21.069545 | orchestrator | 2026-04-09 01:02:21 | INFO
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:02:21.069738 | orchestrator | 2026-04-09 01:02:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:24.112205 | orchestrator | 2026-04-09 01:02:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:02:24.113915 | orchestrator | 2026-04-09 01:02:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:02:24.114310 | orchestrator | 2026-04-09 01:02:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:27.153550 | orchestrator | 2026-04-09 01:02:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:02:27.156371 | orchestrator | 2026-04-09 01:02:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:02:27.156431 | orchestrator | 2026-04-09 01:02:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:30.197972 | orchestrator | 2026-04-09 01:02:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:02:30.199773 | orchestrator | 2026-04-09 01:02:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:02:30.199831 | orchestrator | 2026-04-09 01:02:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:33.236922 | orchestrator | 2026-04-09 01:02:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:02:33.237439 | orchestrator | 2026-04-09 01:02:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:02:33.237477 | orchestrator | 2026-04-09 01:02:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:36.278113 | orchestrator | 2026-04-09 01:02:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:02:36.279666 | orchestrator | 2026-04-09 01:02:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:02:36.279709 | orchestrator | 2026-04-09 01:02:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:39.319928 | orchestrator | 2026-04-09 01:02:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:02:39.321814 | orchestrator | 2026-04-09 01:02:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:02:39.321929 | orchestrator | 2026-04-09 01:02:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:42.363549 | orchestrator | 2026-04-09 01:02:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:02:42.367203 | orchestrator | 2026-04-09 01:02:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:02:42.367288 | orchestrator | 2026-04-09 01:02:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:45.415949 | orchestrator | 2026-04-09 01:02:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:02:45.417642 | orchestrator | 2026-04-09 01:02:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:02:45.417677 | orchestrator | 2026-04-09 01:02:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:48.459169 | orchestrator | 2026-04-09 01:02:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:02:48.461322 | orchestrator | 2026-04-09 01:02:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:02:48.461398 | orchestrator | 2026-04-09 01:02:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:51.512189 | orchestrator | 2026-04-09 01:02:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:02:51.514485 | orchestrator | 2026-04-09 01:02:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:02:51.514537 | orchestrator | 2026-04-09 01:02:51 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:02:54.559965 | orchestrator | 2026-04-09 01:02:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:02:54.562174 | orchestrator | 2026-04-09 01:02:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:02:54.562222 | orchestrator | 2026-04-09 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:02:57.604747 | orchestrator | 2026-04-09 01:02:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:02:57.605574 | orchestrator | 2026-04-09 01:02:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:02:57.605609 | orchestrator | 2026-04-09 01:02:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:00.649885 | orchestrator | 2026-04-09 01:03:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:00.651565 | orchestrator | 2026-04-09 01:03:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:00.651629 | orchestrator | 2026-04-09 01:03:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:03.691197 | orchestrator | 2026-04-09 01:03:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:03.692616 | orchestrator | 2026-04-09 01:03:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:03.692965 | orchestrator | 2026-04-09 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:06.740982 | orchestrator | 2026-04-09 01:03:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:06.742515 | orchestrator | 2026-04-09 01:03:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:06.742584 | orchestrator | 2026-04-09 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:09.782520 | orchestrator | 2026-04-09 
01:03:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:09.783365 | orchestrator | 2026-04-09 01:03:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:09.783423 | orchestrator | 2026-04-09 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:12.822204 | orchestrator | 2026-04-09 01:03:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:12.823469 | orchestrator | 2026-04-09 01:03:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:12.823574 | orchestrator | 2026-04-09 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:15.876266 | orchestrator | 2026-04-09 01:03:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:15.878176 | orchestrator | 2026-04-09 01:03:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:15.878246 | orchestrator | 2026-04-09 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:18.914258 | orchestrator | 2026-04-09 01:03:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:18.916556 | orchestrator | 2026-04-09 01:03:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:18.916641 | orchestrator | 2026-04-09 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:21.963507 | orchestrator | 2026-04-09 01:03:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:21.965676 | orchestrator | 2026-04-09 01:03:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:21.965792 | orchestrator | 2026-04-09 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:25.008637 | orchestrator | 2026-04-09 01:03:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:03:25.010137 | orchestrator | 2026-04-09 01:03:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:25.010194 | orchestrator | 2026-04-09 01:03:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:28.052484 | orchestrator | 2026-04-09 01:03:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:28.053483 | orchestrator | 2026-04-09 01:03:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:28.053853 | orchestrator | 2026-04-09 01:03:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:31.104938 | orchestrator | 2026-04-09 01:03:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:31.106419 | orchestrator | 2026-04-09 01:03:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:31.106492 | orchestrator | 2026-04-09 01:03:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:34.145126 | orchestrator | 2026-04-09 01:03:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:34.146584 | orchestrator | 2026-04-09 01:03:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:34.146618 | orchestrator | 2026-04-09 01:03:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:37.190515 | orchestrator | 2026-04-09 01:03:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:37.192116 | orchestrator | 2026-04-09 01:03:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:37.192155 | orchestrator | 2026-04-09 01:03:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:40.238051 | orchestrator | 2026-04-09 01:03:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:40.238523 | orchestrator | 2026-04-09 01:03:40 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:40.238568 | orchestrator | 2026-04-09 01:03:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:43.280619 | orchestrator | 2026-04-09 01:03:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:43.283380 | orchestrator | 2026-04-09 01:03:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:43.283445 | orchestrator | 2026-04-09 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:46.329212 | orchestrator | 2026-04-09 01:03:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:46.329749 | orchestrator | 2026-04-09 01:03:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:46.329825 | orchestrator | 2026-04-09 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:49.367822 | orchestrator | 2026-04-09 01:03:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:49.369386 | orchestrator | 2026-04-09 01:03:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:49.369440 | orchestrator | 2026-04-09 01:03:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:52.408380 | orchestrator | 2026-04-09 01:03:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:52.409724 | orchestrator | 2026-04-09 01:03:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:52.409851 | orchestrator | 2026-04-09 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:55.453227 | orchestrator | 2026-04-09 01:03:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:55.455027 | orchestrator | 2026-04-09 01:03:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:03:55.455085 | orchestrator | 2026-04-09 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:03:58.499160 | orchestrator | 2026-04-09 01:03:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:03:58.501221 | orchestrator | 2026-04-09 01:03:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:03:58.501279 | orchestrator | 2026-04-09 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:01.551392 | orchestrator | 2026-04-09 01:04:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:01.552386 | orchestrator | 2026-04-09 01:04:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:01.552431 | orchestrator | 2026-04-09 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:04.592155 | orchestrator | 2026-04-09 01:04:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:04.594091 | orchestrator | 2026-04-09 01:04:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:04.594158 | orchestrator | 2026-04-09 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:07.637803 | orchestrator | 2026-04-09 01:04:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:07.639255 | orchestrator | 2026-04-09 01:04:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:07.639354 | orchestrator | 2026-04-09 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:10.680053 | orchestrator | 2026-04-09 01:04:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:10.682120 | orchestrator | 2026-04-09 01:04:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:10.682733 | orchestrator | 2026-04-09 01:04:10 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:04:13.724653 | orchestrator | 2026-04-09 01:04:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:13.727636 | orchestrator | 2026-04-09 01:04:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:13.727769 | orchestrator | 2026-04-09 01:04:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:16.768884 | orchestrator | 2026-04-09 01:04:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:16.770954 | orchestrator | 2026-04-09 01:04:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:16.771038 | orchestrator | 2026-04-09 01:04:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:19.808024 | orchestrator | 2026-04-09 01:04:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:19.809582 | orchestrator | 2026-04-09 01:04:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:19.809644 | orchestrator | 2026-04-09 01:04:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:22.854133 | orchestrator | 2026-04-09 01:04:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:22.856234 | orchestrator | 2026-04-09 01:04:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:22.856350 | orchestrator | 2026-04-09 01:04:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:25.897494 | orchestrator | 2026-04-09 01:04:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:25.899610 | orchestrator | 2026-04-09 01:04:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:25.899693 | orchestrator | 2026-04-09 01:04:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:28.940099 | orchestrator | 2026-04-09 
01:04:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:28.941948 | orchestrator | 2026-04-09 01:04:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:28.942059 | orchestrator | 2026-04-09 01:04:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:31.983805 | orchestrator | 2026-04-09 01:04:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:31.985462 | orchestrator | 2026-04-09 01:04:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:31.985543 | orchestrator | 2026-04-09 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:35.025843 | orchestrator | 2026-04-09 01:04:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:35.029032 | orchestrator | 2026-04-09 01:04:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:35.029093 | orchestrator | 2026-04-09 01:04:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:38.071166 | orchestrator | 2026-04-09 01:04:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:38.073441 | orchestrator | 2026-04-09 01:04:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:38.073499 | orchestrator | 2026-04-09 01:04:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:41.106360 | orchestrator | 2026-04-09 01:04:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:41.108188 | orchestrator | 2026-04-09 01:04:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:41.108289 | orchestrator | 2026-04-09 01:04:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:44.153031 | orchestrator | 2026-04-09 01:04:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:04:44.154534 | orchestrator | 2026-04-09 01:04:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:44.154572 | orchestrator | 2026-04-09 01:04:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:47.196517 | orchestrator | 2026-04-09 01:04:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:47.198416 | orchestrator | 2026-04-09 01:04:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:47.198458 | orchestrator | 2026-04-09 01:04:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:50.236053 | orchestrator | 2026-04-09 01:04:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:50.238039 | orchestrator | 2026-04-09 01:04:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:50.238187 | orchestrator | 2026-04-09 01:04:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:53.282174 | orchestrator | 2026-04-09 01:04:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:53.283401 | orchestrator | 2026-04-09 01:04:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:53.283513 | orchestrator | 2026-04-09 01:04:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:56.326975 | orchestrator | 2026-04-09 01:04:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:56.327705 | orchestrator | 2026-04-09 01:04:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:56.327733 | orchestrator | 2026-04-09 01:04:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:04:59.371100 | orchestrator | 2026-04-09 01:04:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:04:59.375797 | orchestrator | 2026-04-09 01:04:59 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:04:59.375844 | orchestrator | 2026-04-09 01:04:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:02.419498 | orchestrator | 2026-04-09 01:05:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:02.421355 | orchestrator | 2026-04-09 01:05:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:02.421456 | orchestrator | 2026-04-09 01:05:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:05.459296 | orchestrator | 2026-04-09 01:05:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:05.461373 | orchestrator | 2026-04-09 01:05:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:05.461443 | orchestrator | 2026-04-09 01:05:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:08.505469 | orchestrator | 2026-04-09 01:05:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:08.509136 | orchestrator | 2026-04-09 01:05:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:08.509193 | orchestrator | 2026-04-09 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:11.556188 | orchestrator | 2026-04-09 01:05:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:11.557989 | orchestrator | 2026-04-09 01:05:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:11.558091 | orchestrator | 2026-04-09 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:14.607783 | orchestrator | 2026-04-09 01:05:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:14.609282 | orchestrator | 2026-04-09 01:05:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:05:14.609354 | orchestrator | 2026-04-09 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:17.650751 | orchestrator | 2026-04-09 01:05:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:17.651367 | orchestrator | 2026-04-09 01:05:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:17.651390 | orchestrator | 2026-04-09 01:05:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:20.692319 | orchestrator | 2026-04-09 01:05:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:20.694320 | orchestrator | 2026-04-09 01:05:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:20.694383 | orchestrator | 2026-04-09 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:23.740824 | orchestrator | 2026-04-09 01:05:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:23.742997 | orchestrator | 2026-04-09 01:05:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:23.743039 | orchestrator | 2026-04-09 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:26.788496 | orchestrator | 2026-04-09 01:05:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:26.789971 | orchestrator | 2026-04-09 01:05:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:26.790011 | orchestrator | 2026-04-09 01:05:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:29.834115 | orchestrator | 2026-04-09 01:05:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:29.835407 | orchestrator | 2026-04-09 01:05:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:29.835479 | orchestrator | 2026-04-09 01:05:29 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:05:32.884762 | orchestrator | 2026-04-09 01:05:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:32.886581 | orchestrator | 2026-04-09 01:05:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:32.886670 | orchestrator | 2026-04-09 01:05:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:35.929690 | orchestrator | 2026-04-09 01:05:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:35.932392 | orchestrator | 2026-04-09 01:05:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:35.932439 | orchestrator | 2026-04-09 01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:38.977702 | orchestrator | 2026-04-09 01:05:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:38.978774 | orchestrator | 2026-04-09 01:05:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:38.978826 | orchestrator | 2026-04-09 01:05:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:42.015753 | orchestrator | 2026-04-09 01:05:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:42.017108 | orchestrator | 2026-04-09 01:05:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:42.017237 | orchestrator | 2026-04-09 01:05:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:45.062130 | orchestrator | 2026-04-09 01:05:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:45.064526 | orchestrator | 2026-04-09 01:05:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:45.064574 | orchestrator | 2026-04-09 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:48.102789 | orchestrator | 2026-04-09 
01:05:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:48.103319 | orchestrator | 2026-04-09 01:05:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:48.103339 | orchestrator | 2026-04-09 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:51.147052 | orchestrator | 2026-04-09 01:05:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:51.148755 | orchestrator | 2026-04-09 01:05:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:51.148853 | orchestrator | 2026-04-09 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:54.195570 | orchestrator | 2026-04-09 01:05:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:54.197950 | orchestrator | 2026-04-09 01:05:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:54.197992 | orchestrator | 2026-04-09 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:05:57.246646 | orchestrator | 2026-04-09 01:05:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:05:57.248121 | orchestrator | 2026-04-09 01:05:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:05:57.248197 | orchestrator | 2026-04-09 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:00.296968 | orchestrator | 2026-04-09 01:06:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:06:00.298396 | orchestrator | 2026-04-09 01:06:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:06:00.298581 | orchestrator | 2026-04-09 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:06:03.347034 | orchestrator | 2026-04-09 01:06:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:06:03.348818 | orchestrator | 2026-04-09 01:06:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 01:06:03.348861 | orchestrator | 2026-04-09 01:06:03 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:06:06.393496 | orchestrator | 2026-04-09 01:06:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 01:06:06.395400 | orchestrator | 2026-04-09 01:06:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 01:06:06.395478 | orchestrator | 2026-04-09 01:06:06 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output trimmed: tasks 6828e9fb-0b8a-4283-9fa1-3c6673200e24 and 4918e61a-8c4a-42f2-9f33-2d15624c1ede polled every ~3 s, both remaining in state STARTED, from 01:06:09 through 01:11:20 ...]
2026-04-09 01:11:20.285949 | orchestrator | 2026-04-09 01:11:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state
STARTED 2026-04-09 01:11:20.287986 | orchestrator | 2026-04-09 01:11:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:20.288035 | orchestrator | 2026-04-09 01:11:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:23.332096 | orchestrator | 2026-04-09 01:11:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:23.334160 | orchestrator | 2026-04-09 01:11:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:23.334453 | orchestrator | 2026-04-09 01:11:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:26.383567 | orchestrator | 2026-04-09 01:11:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:26.385223 | orchestrator | 2026-04-09 01:11:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:26.385317 | orchestrator | 2026-04-09 01:11:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:29.435288 | orchestrator | 2026-04-09 01:11:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:29.437927 | orchestrator | 2026-04-09 01:11:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:29.437970 | orchestrator | 2026-04-09 01:11:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:32.480671 | orchestrator | 2026-04-09 01:11:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:32.482320 | orchestrator | 2026-04-09 01:11:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:32.482372 | orchestrator | 2026-04-09 01:11:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:35.525738 | orchestrator | 2026-04-09 01:11:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:35.527283 | orchestrator | 2026-04-09 01:11:35 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:35.527339 | orchestrator | 2026-04-09 01:11:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:38.569632 | orchestrator | 2026-04-09 01:11:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:38.571314 | orchestrator | 2026-04-09 01:11:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:38.571356 | orchestrator | 2026-04-09 01:11:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:41.620207 | orchestrator | 2026-04-09 01:11:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:41.621331 | orchestrator | 2026-04-09 01:11:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:41.621354 | orchestrator | 2026-04-09 01:11:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:44.673566 | orchestrator | 2026-04-09 01:11:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:44.675376 | orchestrator | 2026-04-09 01:11:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:44.675429 | orchestrator | 2026-04-09 01:11:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:47.722242 | orchestrator | 2026-04-09 01:11:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:47.725687 | orchestrator | 2026-04-09 01:11:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:47.725758 | orchestrator | 2026-04-09 01:11:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:50.768944 | orchestrator | 2026-04-09 01:11:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:50.770308 | orchestrator | 2026-04-09 01:11:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:11:50.770381 | orchestrator | 2026-04-09 01:11:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:53.817581 | orchestrator | 2026-04-09 01:11:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:53.819322 | orchestrator | 2026-04-09 01:11:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:53.819364 | orchestrator | 2026-04-09 01:11:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:56.864692 | orchestrator | 2026-04-09 01:11:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:56.865676 | orchestrator | 2026-04-09 01:11:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:56.865763 | orchestrator | 2026-04-09 01:11:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:11:59.912386 | orchestrator | 2026-04-09 01:11:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:11:59.913752 | orchestrator | 2026-04-09 01:11:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:11:59.913782 | orchestrator | 2026-04-09 01:11:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:02.960345 | orchestrator | 2026-04-09 01:12:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:02.962833 | orchestrator | 2026-04-09 01:12:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:02.962879 | orchestrator | 2026-04-09 01:12:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:06.015085 | orchestrator | 2026-04-09 01:12:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:06.017529 | orchestrator | 2026-04-09 01:12:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:06.017568 | orchestrator | 2026-04-09 01:12:06 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:12:09.061024 | orchestrator | 2026-04-09 01:12:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:09.063065 | orchestrator | 2026-04-09 01:12:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:09.063147 | orchestrator | 2026-04-09 01:12:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:12.114733 | orchestrator | 2026-04-09 01:12:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:12.117807 | orchestrator | 2026-04-09 01:12:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:12.117886 | orchestrator | 2026-04-09 01:12:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:15.160645 | orchestrator | 2026-04-09 01:12:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:15.162745 | orchestrator | 2026-04-09 01:12:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:15.162794 | orchestrator | 2026-04-09 01:12:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:18.206675 | orchestrator | 2026-04-09 01:12:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:18.208202 | orchestrator | 2026-04-09 01:12:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:18.208286 | orchestrator | 2026-04-09 01:12:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:21.252115 | orchestrator | 2026-04-09 01:12:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:21.254249 | orchestrator | 2026-04-09 01:12:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:21.254311 | orchestrator | 2026-04-09 01:12:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:24.299622 | orchestrator | 2026-04-09 
01:12:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:24.301322 | orchestrator | 2026-04-09 01:12:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:24.301371 | orchestrator | 2026-04-09 01:12:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:27.342923 | orchestrator | 2026-04-09 01:12:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:27.345099 | orchestrator | 2026-04-09 01:12:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:27.345147 | orchestrator | 2026-04-09 01:12:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:30.391483 | orchestrator | 2026-04-09 01:12:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:30.396181 | orchestrator | 2026-04-09 01:12:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:30.396256 | orchestrator | 2026-04-09 01:12:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:33.443648 | orchestrator | 2026-04-09 01:12:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:33.445013 | orchestrator | 2026-04-09 01:12:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:33.445071 | orchestrator | 2026-04-09 01:12:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:36.487653 | orchestrator | 2026-04-09 01:12:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:36.489480 | orchestrator | 2026-04-09 01:12:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:36.489565 | orchestrator | 2026-04-09 01:12:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:39.535940 | orchestrator | 2026-04-09 01:12:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:12:39.537986 | orchestrator | 2026-04-09 01:12:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:39.538170 | orchestrator | 2026-04-09 01:12:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:42.593438 | orchestrator | 2026-04-09 01:12:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:42.595161 | orchestrator | 2026-04-09 01:12:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:42.595208 | orchestrator | 2026-04-09 01:12:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:45.643412 | orchestrator | 2026-04-09 01:12:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:45.644873 | orchestrator | 2026-04-09 01:12:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:45.644921 | orchestrator | 2026-04-09 01:12:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:48.691670 | orchestrator | 2026-04-09 01:12:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:48.692185 | orchestrator | 2026-04-09 01:12:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:48.692197 | orchestrator | 2026-04-09 01:12:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:51.739925 | orchestrator | 2026-04-09 01:12:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:51.741975 | orchestrator | 2026-04-09 01:12:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:51.742230 | orchestrator | 2026-04-09 01:12:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:54.785415 | orchestrator | 2026-04-09 01:12:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:54.787177 | orchestrator | 2026-04-09 01:12:54 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:54.787206 | orchestrator | 2026-04-09 01:12:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:12:57.829761 | orchestrator | 2026-04-09 01:12:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:12:57.830708 | orchestrator | 2026-04-09 01:12:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:12:57.830750 | orchestrator | 2026-04-09 01:12:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:00.880409 | orchestrator | 2026-04-09 01:13:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:00.881133 | orchestrator | 2026-04-09 01:13:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:00.881849 | orchestrator | 2026-04-09 01:13:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:03.922260 | orchestrator | 2026-04-09 01:13:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:03.923818 | orchestrator | 2026-04-09 01:13:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:03.923869 | orchestrator | 2026-04-09 01:13:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:06.965548 | orchestrator | 2026-04-09 01:13:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:06.967273 | orchestrator | 2026-04-09 01:13:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:06.967304 | orchestrator | 2026-04-09 01:13:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:10.012767 | orchestrator | 2026-04-09 01:13:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:10.013633 | orchestrator | 2026-04-09 01:13:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:13:10.013704 | orchestrator | 2026-04-09 01:13:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:13.061046 | orchestrator | 2026-04-09 01:13:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:13.063191 | orchestrator | 2026-04-09 01:13:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:13.063314 | orchestrator | 2026-04-09 01:13:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:16.105551 | orchestrator | 2026-04-09 01:13:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:16.106430 | orchestrator | 2026-04-09 01:13:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:16.106455 | orchestrator | 2026-04-09 01:13:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:19.150842 | orchestrator | 2026-04-09 01:13:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:19.152804 | orchestrator | 2026-04-09 01:13:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:19.152851 | orchestrator | 2026-04-09 01:13:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:22.203112 | orchestrator | 2026-04-09 01:13:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:22.205547 | orchestrator | 2026-04-09 01:13:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:22.206064 | orchestrator | 2026-04-09 01:13:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:25.253934 | orchestrator | 2026-04-09 01:13:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:25.256170 | orchestrator | 2026-04-09 01:13:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:25.256275 | orchestrator | 2026-04-09 01:13:25 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:13:28.306817 | orchestrator | 2026-04-09 01:13:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:28.308203 | orchestrator | 2026-04-09 01:13:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:28.308248 | orchestrator | 2026-04-09 01:13:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:31.360965 | orchestrator | 2026-04-09 01:13:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:31.362163 | orchestrator | 2026-04-09 01:13:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:31.362228 | orchestrator | 2026-04-09 01:13:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:34.413314 | orchestrator | 2026-04-09 01:13:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:34.417171 | orchestrator | 2026-04-09 01:13:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:34.417251 | orchestrator | 2026-04-09 01:13:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:37.462562 | orchestrator | 2026-04-09 01:13:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:37.464153 | orchestrator | 2026-04-09 01:13:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:37.464277 | orchestrator | 2026-04-09 01:13:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:40.512255 | orchestrator | 2026-04-09 01:13:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:40.513685 | orchestrator | 2026-04-09 01:13:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:40.513775 | orchestrator | 2026-04-09 01:13:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:43.559942 | orchestrator | 2026-04-09 
01:13:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:43.562228 | orchestrator | 2026-04-09 01:13:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:43.562307 | orchestrator | 2026-04-09 01:13:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:46.607745 | orchestrator | 2026-04-09 01:13:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:46.609495 | orchestrator | 2026-04-09 01:13:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:46.609597 | orchestrator | 2026-04-09 01:13:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:49.657232 | orchestrator | 2026-04-09 01:13:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:49.659711 | orchestrator | 2026-04-09 01:13:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:49.659757 | orchestrator | 2026-04-09 01:13:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:52.708058 | orchestrator | 2026-04-09 01:13:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:52.708891 | orchestrator | 2026-04-09 01:13:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:52.708944 | orchestrator | 2026-04-09 01:13:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:55.751131 | orchestrator | 2026-04-09 01:13:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:13:55.753243 | orchestrator | 2026-04-09 01:13:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:55.753327 | orchestrator | 2026-04-09 01:13:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:13:58.802329 | orchestrator | 2026-04-09 01:13:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:13:58.803964 | orchestrator | 2026-04-09 01:13:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:13:58.804088 | orchestrator | 2026-04-09 01:13:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:01.847500 | orchestrator | 2026-04-09 01:14:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:01.849518 | orchestrator | 2026-04-09 01:14:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:01.849576 | orchestrator | 2026-04-09 01:14:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:04.900520 | orchestrator | 2026-04-09 01:14:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:04.902068 | orchestrator | 2026-04-09 01:14:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:04.902115 | orchestrator | 2026-04-09 01:14:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:07.950446 | orchestrator | 2026-04-09 01:14:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:07.951586 | orchestrator | 2026-04-09 01:14:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:07.951624 | orchestrator | 2026-04-09 01:14:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:10.988031 | orchestrator | 2026-04-09 01:14:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:10.988722 | orchestrator | 2026-04-09 01:14:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:10.988824 | orchestrator | 2026-04-09 01:14:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:14.038822 | orchestrator | 2026-04-09 01:14:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:14.040782 | orchestrator | 2026-04-09 01:14:14 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:14.040852 | orchestrator | 2026-04-09 01:14:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:17.087402 | orchestrator | 2026-04-09 01:14:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:17.089825 | orchestrator | 2026-04-09 01:14:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:17.089965 | orchestrator | 2026-04-09 01:14:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:20.140156 | orchestrator | 2026-04-09 01:14:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:20.141143 | orchestrator | 2026-04-09 01:14:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:20.141192 | orchestrator | 2026-04-09 01:14:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:23.186960 | orchestrator | 2026-04-09 01:14:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:23.188527 | orchestrator | 2026-04-09 01:14:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:23.188560 | orchestrator | 2026-04-09 01:14:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:26.230747 | orchestrator | 2026-04-09 01:14:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:26.232436 | orchestrator | 2026-04-09 01:14:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:26.232781 | orchestrator | 2026-04-09 01:14:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:29.274567 | orchestrator | 2026-04-09 01:14:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:29.276678 | orchestrator | 2026-04-09 01:14:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:14:29.276748 | orchestrator | 2026-04-09 01:14:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:32.317103 | orchestrator | 2026-04-09 01:14:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:32.319085 | orchestrator | 2026-04-09 01:14:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:32.319152 | orchestrator | 2026-04-09 01:14:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:35.366059 | orchestrator | 2026-04-09 01:14:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:35.368790 | orchestrator | 2026-04-09 01:14:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:35.368845 | orchestrator | 2026-04-09 01:14:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:38.414296 | orchestrator | 2026-04-09 01:14:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:38.416671 | orchestrator | 2026-04-09 01:14:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:38.416743 | orchestrator | 2026-04-09 01:14:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:41.458849 | orchestrator | 2026-04-09 01:14:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:41.460204 | orchestrator | 2026-04-09 01:14:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:41.460279 | orchestrator | 2026-04-09 01:14:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:14:44.506764 | orchestrator | 2026-04-09 01:14:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:14:44.507810 | orchestrator | 2026-04-09 01:14:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:14:44.507943 | orchestrator | 2026-04-09 01:14:44 | INFO  | Wait 1 second(s) 
until the next check
2026-04-09 01:14:47.546534 | orchestrator | 2026-04-09 01:14:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 01:14:47.548943 | orchestrator | 2026-04-09 01:14:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 01:14:47.549086 | orchestrator | 2026-04-09 01:14:47 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:20:01.528748 | orchestrator | 2026-04-09 01:20:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 01:20:01.530137 | orchestrator | 2026-04-09 01:20:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 01:20:01.530197 | orchestrator | 2026-04-09 01:20:01 | INFO  | Wait 1 second(s)
until the next check 2026-04-09 01:20:04.579947 | orchestrator | 2026-04-09 01:20:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:04.583498 | orchestrator | 2026-04-09 01:20:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:04.585710 | orchestrator | 2026-04-09 01:20:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:07.625485 | orchestrator | 2026-04-09 01:20:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:07.626935 | orchestrator | 2026-04-09 01:20:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:07.627007 | orchestrator | 2026-04-09 01:20:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:10.671731 | orchestrator | 2026-04-09 01:20:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:10.673231 | orchestrator | 2026-04-09 01:20:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:10.673410 | orchestrator | 2026-04-09 01:20:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:13.725725 | orchestrator | 2026-04-09 01:20:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:13.727970 | orchestrator | 2026-04-09 01:20:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:13.729985 | orchestrator | 2026-04-09 01:20:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:16.778224 | orchestrator | 2026-04-09 01:20:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:16.779994 | orchestrator | 2026-04-09 01:20:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:16.780047 | orchestrator | 2026-04-09 01:20:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:19.823623 | orchestrator | 2026-04-09 
01:20:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:19.825423 | orchestrator | 2026-04-09 01:20:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:19.825484 | orchestrator | 2026-04-09 01:20:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:22.873155 | orchestrator | 2026-04-09 01:20:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:22.874644 | orchestrator | 2026-04-09 01:20:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:22.874755 | orchestrator | 2026-04-09 01:20:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:25.919533 | orchestrator | 2026-04-09 01:20:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:25.921721 | orchestrator | 2026-04-09 01:20:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:25.921797 | orchestrator | 2026-04-09 01:20:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:28.967545 | orchestrator | 2026-04-09 01:20:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:28.968900 | orchestrator | 2026-04-09 01:20:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:28.968981 | orchestrator | 2026-04-09 01:20:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:32.012632 | orchestrator | 2026-04-09 01:20:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:32.014932 | orchestrator | 2026-04-09 01:20:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:32.015018 | orchestrator | 2026-04-09 01:20:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:35.058082 | orchestrator | 2026-04-09 01:20:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:20:35.060256 | orchestrator | 2026-04-09 01:20:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:35.060311 | orchestrator | 2026-04-09 01:20:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:38.104746 | orchestrator | 2026-04-09 01:20:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:38.106154 | orchestrator | 2026-04-09 01:20:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:38.106276 | orchestrator | 2026-04-09 01:20:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:41.158748 | orchestrator | 2026-04-09 01:20:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:41.159985 | orchestrator | 2026-04-09 01:20:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:41.160021 | orchestrator | 2026-04-09 01:20:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:44.213225 | orchestrator | 2026-04-09 01:20:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:44.216591 | orchestrator | 2026-04-09 01:20:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:44.216692 | orchestrator | 2026-04-09 01:20:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:47.266548 | orchestrator | 2026-04-09 01:20:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:47.268014 | orchestrator | 2026-04-09 01:20:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:47.268063 | orchestrator | 2026-04-09 01:20:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:50.314196 | orchestrator | 2026-04-09 01:20:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:50.315942 | orchestrator | 2026-04-09 01:20:50 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:50.316014 | orchestrator | 2026-04-09 01:20:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:53.359621 | orchestrator | 2026-04-09 01:20:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:53.361452 | orchestrator | 2026-04-09 01:20:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:53.361516 | orchestrator | 2026-04-09 01:20:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:56.411452 | orchestrator | 2026-04-09 01:20:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:56.413032 | orchestrator | 2026-04-09 01:20:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:56.413061 | orchestrator | 2026-04-09 01:20:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:20:59.461471 | orchestrator | 2026-04-09 01:20:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:20:59.463231 | orchestrator | 2026-04-09 01:20:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:20:59.463318 | orchestrator | 2026-04-09 01:20:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:02.508835 | orchestrator | 2026-04-09 01:21:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:02.511270 | orchestrator | 2026-04-09 01:21:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:02.511639 | orchestrator | 2026-04-09 01:21:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:05.555465 | orchestrator | 2026-04-09 01:21:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:05.557407 | orchestrator | 2026-04-09 01:21:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:21:05.557596 | orchestrator | 2026-04-09 01:21:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:08.604668 | orchestrator | 2026-04-09 01:21:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:08.606233 | orchestrator | 2026-04-09 01:21:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:08.606296 | orchestrator | 2026-04-09 01:21:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:11.656159 | orchestrator | 2026-04-09 01:21:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:11.658190 | orchestrator | 2026-04-09 01:21:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:11.658233 | orchestrator | 2026-04-09 01:21:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:14.706266 | orchestrator | 2026-04-09 01:21:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:14.709570 | orchestrator | 2026-04-09 01:21:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:14.709655 | orchestrator | 2026-04-09 01:21:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:17.750475 | orchestrator | 2026-04-09 01:21:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:17.752525 | orchestrator | 2026-04-09 01:21:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:17.752629 | orchestrator | 2026-04-09 01:21:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:20.798549 | orchestrator | 2026-04-09 01:21:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:20.799867 | orchestrator | 2026-04-09 01:21:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:20.799924 | orchestrator | 2026-04-09 01:21:20 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:21:23.846252 | orchestrator | 2026-04-09 01:21:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:23.848080 | orchestrator | 2026-04-09 01:21:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:23.848144 | orchestrator | 2026-04-09 01:21:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:26.894356 | orchestrator | 2026-04-09 01:21:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:26.897152 | orchestrator | 2026-04-09 01:21:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:26.897289 | orchestrator | 2026-04-09 01:21:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:29.944518 | orchestrator | 2026-04-09 01:21:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:29.946262 | orchestrator | 2026-04-09 01:21:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:29.946336 | orchestrator | 2026-04-09 01:21:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:32.994291 | orchestrator | 2026-04-09 01:21:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:32.996505 | orchestrator | 2026-04-09 01:21:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:32.996625 | orchestrator | 2026-04-09 01:21:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:36.050776 | orchestrator | 2026-04-09 01:21:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:36.051814 | orchestrator | 2026-04-09 01:21:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:36.051844 | orchestrator | 2026-04-09 01:21:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:39.100253 | orchestrator | 2026-04-09 
01:21:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:39.102186 | orchestrator | 2026-04-09 01:21:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:39.102273 | orchestrator | 2026-04-09 01:21:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:42.147585 | orchestrator | 2026-04-09 01:21:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:42.150681 | orchestrator | 2026-04-09 01:21:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:42.150780 | orchestrator | 2026-04-09 01:21:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:45.191168 | orchestrator | 2026-04-09 01:21:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:45.193507 | orchestrator | 2026-04-09 01:21:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:45.193574 | orchestrator | 2026-04-09 01:21:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:48.238890 | orchestrator | 2026-04-09 01:21:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:48.240575 | orchestrator | 2026-04-09 01:21:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:48.240618 | orchestrator | 2026-04-09 01:21:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:51.292373 | orchestrator | 2026-04-09 01:21:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:51.295180 | orchestrator | 2026-04-09 01:21:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:51.295266 | orchestrator | 2026-04-09 01:21:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:54.341796 | orchestrator | 2026-04-09 01:21:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:21:54.344982 | orchestrator | 2026-04-09 01:21:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:54.345183 | orchestrator | 2026-04-09 01:21:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:21:57.392392 | orchestrator | 2026-04-09 01:21:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:21:57.393663 | orchestrator | 2026-04-09 01:21:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:21:57.393712 | orchestrator | 2026-04-09 01:21:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:00.438449 | orchestrator | 2026-04-09 01:22:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:00.439794 | orchestrator | 2026-04-09 01:22:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:00.439851 | orchestrator | 2026-04-09 01:22:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:03.482480 | orchestrator | 2026-04-09 01:22:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:03.484638 | orchestrator | 2026-04-09 01:22:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:03.484727 | orchestrator | 2026-04-09 01:22:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:06.534445 | orchestrator | 2026-04-09 01:22:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:06.536428 | orchestrator | 2026-04-09 01:22:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:06.536503 | orchestrator | 2026-04-09 01:22:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:09.585756 | orchestrator | 2026-04-09 01:22:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:09.587774 | orchestrator | 2026-04-09 01:22:09 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:09.587851 | orchestrator | 2026-04-09 01:22:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:12.641494 | orchestrator | 2026-04-09 01:22:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:12.643633 | orchestrator | 2026-04-09 01:22:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:12.643716 | orchestrator | 2026-04-09 01:22:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:15.687860 | orchestrator | 2026-04-09 01:22:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:15.689533 | orchestrator | 2026-04-09 01:22:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:15.689564 | orchestrator | 2026-04-09 01:22:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:18.732522 | orchestrator | 2026-04-09 01:22:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:18.735711 | orchestrator | 2026-04-09 01:22:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:18.735795 | orchestrator | 2026-04-09 01:22:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:21.779626 | orchestrator | 2026-04-09 01:22:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:21.781606 | orchestrator | 2026-04-09 01:22:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:21.781668 | orchestrator | 2026-04-09 01:22:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:24.825816 | orchestrator | 2026-04-09 01:22:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:24.827178 | orchestrator | 2026-04-09 01:22:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:22:24.827247 | orchestrator | 2026-04-09 01:22:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:27.872898 | orchestrator | 2026-04-09 01:22:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:27.874221 | orchestrator | 2026-04-09 01:22:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:27.874296 | orchestrator | 2026-04-09 01:22:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:30.916436 | orchestrator | 2026-04-09 01:22:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:30.918605 | orchestrator | 2026-04-09 01:22:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:30.918672 | orchestrator | 2026-04-09 01:22:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:33.966141 | orchestrator | 2026-04-09 01:22:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:33.967886 | orchestrator | 2026-04-09 01:22:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:33.967930 | orchestrator | 2026-04-09 01:22:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:37.013918 | orchestrator | 2026-04-09 01:22:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:37.015614 | orchestrator | 2026-04-09 01:22:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:37.015755 | orchestrator | 2026-04-09 01:22:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:40.061922 | orchestrator | 2026-04-09 01:22:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:40.063291 | orchestrator | 2026-04-09 01:22:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:40.063331 | orchestrator | 2026-04-09 01:22:40 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:22:43.106344 | orchestrator | 2026-04-09 01:22:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:43.107865 | orchestrator | 2026-04-09 01:22:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:43.107926 | orchestrator | 2026-04-09 01:22:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:46.147581 | orchestrator | 2026-04-09 01:22:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:46.147787 | orchestrator | 2026-04-09 01:22:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:46.147810 | orchestrator | 2026-04-09 01:22:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:49.188817 | orchestrator | 2026-04-09 01:22:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:49.190371 | orchestrator | 2026-04-09 01:22:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:49.190463 | orchestrator | 2026-04-09 01:22:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:52.236276 | orchestrator | 2026-04-09 01:22:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:52.239092 | orchestrator | 2026-04-09 01:22:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:52.239142 | orchestrator | 2026-04-09 01:22:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:55.285604 | orchestrator | 2026-04-09 01:22:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:55.287670 | orchestrator | 2026-04-09 01:22:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:55.287734 | orchestrator | 2026-04-09 01:22:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:22:58.329150 | orchestrator | 2026-04-09 
01:22:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:22:58.331495 | orchestrator | 2026-04-09 01:22:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:22:58.331567 | orchestrator | 2026-04-09 01:22:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:01.379260 | orchestrator | 2026-04-09 01:23:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:01.380807 | orchestrator | 2026-04-09 01:23:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:01.380902 | orchestrator | 2026-04-09 01:23:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:04.422540 | orchestrator | 2026-04-09 01:23:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:04.424749 | orchestrator | 2026-04-09 01:23:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:04.424876 | orchestrator | 2026-04-09 01:23:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:07.474258 | orchestrator | 2026-04-09 01:23:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:07.477984 | orchestrator | 2026-04-09 01:23:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:07.478070 | orchestrator | 2026-04-09 01:23:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:10.518539 | orchestrator | 2026-04-09 01:23:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:10.520378 | orchestrator | 2026-04-09 01:23:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:10.520457 | orchestrator | 2026-04-09 01:23:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:13.564865 | orchestrator | 2026-04-09 01:23:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:23:13.566806 | orchestrator | 2026-04-09 01:23:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:13.566875 | orchestrator | 2026-04-09 01:23:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:16.613873 | orchestrator | 2026-04-09 01:23:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:16.615805 | orchestrator | 2026-04-09 01:23:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:16.615871 | orchestrator | 2026-04-09 01:23:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:19.660913 | orchestrator | 2026-04-09 01:23:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:19.662748 | orchestrator | 2026-04-09 01:23:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:19.662846 | orchestrator | 2026-04-09 01:23:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:22.707457 | orchestrator | 2026-04-09 01:23:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:22.708748 | orchestrator | 2026-04-09 01:23:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:22.708837 | orchestrator | 2026-04-09 01:23:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:25.757275 | orchestrator | 2026-04-09 01:23:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:25.758597 | orchestrator | 2026-04-09 01:23:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:25.758640 | orchestrator | 2026-04-09 01:23:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:28.802984 | orchestrator | 2026-04-09 01:23:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:28.804876 | orchestrator | 2026-04-09 01:23:28 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:28.804907 | orchestrator | 2026-04-09 01:23:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:31.850890 | orchestrator | 2026-04-09 01:23:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:31.853830 | orchestrator | 2026-04-09 01:23:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:31.853898 | orchestrator | 2026-04-09 01:23:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:34.894989 | orchestrator | 2026-04-09 01:23:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:34.896796 | orchestrator | 2026-04-09 01:23:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:34.896831 | orchestrator | 2026-04-09 01:23:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:37.945020 | orchestrator | 2026-04-09 01:23:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:37.946990 | orchestrator | 2026-04-09 01:23:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:37.947064 | orchestrator | 2026-04-09 01:23:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:40.996570 | orchestrator | 2026-04-09 01:23:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:40.998062 | orchestrator | 2026-04-09 01:23:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:23:40.998239 | orchestrator | 2026-04-09 01:23:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:23:44.043237 | orchestrator | 2026-04-09 01:23:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:23:44.044582 | orchestrator | 2026-04-09 01:23:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:23:44.044690 | orchestrator | 2026-04-09 01:23:44 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:23:47.091663 | orchestrator | 2026-04-09 01:23:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 01:23:47.093033 | orchestrator | 2026-04-09 01:23:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 01:23:47.093131 | orchestrator | 2026-04-09 01:23:47 | INFO  | Wait 1 second(s) until the next check
[identical polling entries repeated every ~3 seconds from 01:23:50 through 01:29:13 elided; both tasks remained in state STARTED throughout]
2026-04-09 01:29:16.342639 | orchestrator | 2026-04-09 01:29:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 01:29:16.345905 | orchestrator | 2026-04-09 01:29:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 01:29:16.345957 | orchestrator | 2026-04-09 01:29:16 | INFO  | Wait 1 second(s)
until the next check 2026-04-09 01:29:19.396938 | orchestrator | 2026-04-09 01:29:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:19.399225 | orchestrator | 2026-04-09 01:29:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:19.399419 | orchestrator | 2026-04-09 01:29:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:22.449089 | orchestrator | 2026-04-09 01:29:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:22.452242 | orchestrator | 2026-04-09 01:29:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:22.452296 | orchestrator | 2026-04-09 01:29:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:25.507425 | orchestrator | 2026-04-09 01:29:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:25.508841 | orchestrator | 2026-04-09 01:29:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:25.508941 | orchestrator | 2026-04-09 01:29:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:28.559529 | orchestrator | 2026-04-09 01:29:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:28.561079 | orchestrator | 2026-04-09 01:29:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:28.561128 | orchestrator | 2026-04-09 01:29:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:31.615207 | orchestrator | 2026-04-09 01:29:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:31.620236 | orchestrator | 2026-04-09 01:29:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:31.620290 | orchestrator | 2026-04-09 01:29:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:34.663021 | orchestrator | 2026-04-09 
01:29:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:34.664771 | orchestrator | 2026-04-09 01:29:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:34.664836 | orchestrator | 2026-04-09 01:29:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:37.709972 | orchestrator | 2026-04-09 01:29:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:37.712835 | orchestrator | 2026-04-09 01:29:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:37.712898 | orchestrator | 2026-04-09 01:29:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:40.757460 | orchestrator | 2026-04-09 01:29:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:40.759610 | orchestrator | 2026-04-09 01:29:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:40.759864 | orchestrator | 2026-04-09 01:29:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:43.816906 | orchestrator | 2026-04-09 01:29:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:43.818204 | orchestrator | 2026-04-09 01:29:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:43.818462 | orchestrator | 2026-04-09 01:29:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:46.865739 | orchestrator | 2026-04-09 01:29:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:46.867489 | orchestrator | 2026-04-09 01:29:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:46.867533 | orchestrator | 2026-04-09 01:29:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:49.913194 | orchestrator | 2026-04-09 01:29:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:29:49.914243 | orchestrator | 2026-04-09 01:29:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:49.914291 | orchestrator | 2026-04-09 01:29:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:52.957013 | orchestrator | 2026-04-09 01:29:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:52.959955 | orchestrator | 2026-04-09 01:29:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:52.960063 | orchestrator | 2026-04-09 01:29:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:56.005067 | orchestrator | 2026-04-09 01:29:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:56.006612 | orchestrator | 2026-04-09 01:29:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:56.006721 | orchestrator | 2026-04-09 01:29:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:29:59.058388 | orchestrator | 2026-04-09 01:29:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:29:59.059835 | orchestrator | 2026-04-09 01:29:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:29:59.059894 | orchestrator | 2026-04-09 01:29:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:02.104930 | orchestrator | 2026-04-09 01:30:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:02.105899 | orchestrator | 2026-04-09 01:30:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:02.105954 | orchestrator | 2026-04-09 01:30:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:05.157398 | orchestrator | 2026-04-09 01:30:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:05.159616 | orchestrator | 2026-04-09 01:30:05 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:05.159671 | orchestrator | 2026-04-09 01:30:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:08.214511 | orchestrator | 2026-04-09 01:30:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:08.215506 | orchestrator | 2026-04-09 01:30:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:08.215954 | orchestrator | 2026-04-09 01:30:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:11.267544 | orchestrator | 2026-04-09 01:30:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:11.269422 | orchestrator | 2026-04-09 01:30:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:11.269491 | orchestrator | 2026-04-09 01:30:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:14.330880 | orchestrator | 2026-04-09 01:30:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:14.332519 | orchestrator | 2026-04-09 01:30:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:14.332608 | orchestrator | 2026-04-09 01:30:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:17.380977 | orchestrator | 2026-04-09 01:30:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:17.382737 | orchestrator | 2026-04-09 01:30:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:17.382804 | orchestrator | 2026-04-09 01:30:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:20.427583 | orchestrator | 2026-04-09 01:30:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:20.428412 | orchestrator | 2026-04-09 01:30:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:30:20.428463 | orchestrator | 2026-04-09 01:30:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:23.479260 | orchestrator | 2026-04-09 01:30:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:23.481128 | orchestrator | 2026-04-09 01:30:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:23.481195 | orchestrator | 2026-04-09 01:30:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:26.529979 | orchestrator | 2026-04-09 01:30:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:26.532300 | orchestrator | 2026-04-09 01:30:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:26.532416 | orchestrator | 2026-04-09 01:30:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:29.575946 | orchestrator | 2026-04-09 01:30:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:29.577880 | orchestrator | 2026-04-09 01:30:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:29.578008 | orchestrator | 2026-04-09 01:30:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:32.624263 | orchestrator | 2026-04-09 01:30:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:32.625811 | orchestrator | 2026-04-09 01:30:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:32.626010 | orchestrator | 2026-04-09 01:30:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:35.674260 | orchestrator | 2026-04-09 01:30:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:35.675763 | orchestrator | 2026-04-09 01:30:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:35.675820 | orchestrator | 2026-04-09 01:30:35 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:30:38.719132 | orchestrator | 2026-04-09 01:30:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:38.720570 | orchestrator | 2026-04-09 01:30:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:38.720599 | orchestrator | 2026-04-09 01:30:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:41.775500 | orchestrator | 2026-04-09 01:30:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:41.776195 | orchestrator | 2026-04-09 01:30:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:41.776238 | orchestrator | 2026-04-09 01:30:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:44.823876 | orchestrator | 2026-04-09 01:30:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:44.825870 | orchestrator | 2026-04-09 01:30:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:44.825934 | orchestrator | 2026-04-09 01:30:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:47.876953 | orchestrator | 2026-04-09 01:30:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:47.878272 | orchestrator | 2026-04-09 01:30:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:47.878360 | orchestrator | 2026-04-09 01:30:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:50.928701 | orchestrator | 2026-04-09 01:30:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:50.931292 | orchestrator | 2026-04-09 01:30:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:50.931466 | orchestrator | 2026-04-09 01:30:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:53.979524 | orchestrator | 2026-04-09 
01:30:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:53.980714 | orchestrator | 2026-04-09 01:30:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:53.980772 | orchestrator | 2026-04-09 01:30:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:30:57.031266 | orchestrator | 2026-04-09 01:30:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:30:57.032531 | orchestrator | 2026-04-09 01:30:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:30:57.032634 | orchestrator | 2026-04-09 01:30:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:00.078007 | orchestrator | 2026-04-09 01:31:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:00.078812 | orchestrator | 2026-04-09 01:31:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:00.078893 | orchestrator | 2026-04-09 01:31:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:03.128111 | orchestrator | 2026-04-09 01:31:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:03.129200 | orchestrator | 2026-04-09 01:31:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:03.129354 | orchestrator | 2026-04-09 01:31:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:06.171227 | orchestrator | 2026-04-09 01:31:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:06.173141 | orchestrator | 2026-04-09 01:31:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:06.173196 | orchestrator | 2026-04-09 01:31:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:09.216586 | orchestrator | 2026-04-09 01:31:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:31:09.217867 | orchestrator | 2026-04-09 01:31:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:09.218078 | orchestrator | 2026-04-09 01:31:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:12.259952 | orchestrator | 2026-04-09 01:31:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:12.261346 | orchestrator | 2026-04-09 01:31:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:12.261379 | orchestrator | 2026-04-09 01:31:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:15.304296 | orchestrator | 2026-04-09 01:31:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:15.304429 | orchestrator | 2026-04-09 01:31:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:15.304487 | orchestrator | 2026-04-09 01:31:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:18.349241 | orchestrator | 2026-04-09 01:31:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:18.350003 | orchestrator | 2026-04-09 01:31:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:18.350085 | orchestrator | 2026-04-09 01:31:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:21.394340 | orchestrator | 2026-04-09 01:31:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:21.395708 | orchestrator | 2026-04-09 01:31:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:21.395772 | orchestrator | 2026-04-09 01:31:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:24.442812 | orchestrator | 2026-04-09 01:31:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:24.444863 | orchestrator | 2026-04-09 01:31:24 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:24.444991 | orchestrator | 2026-04-09 01:31:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:27.490877 | orchestrator | 2026-04-09 01:31:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:27.492252 | orchestrator | 2026-04-09 01:31:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:27.492479 | orchestrator | 2026-04-09 01:31:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:30.535816 | orchestrator | 2026-04-09 01:31:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:30.537367 | orchestrator | 2026-04-09 01:31:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:30.537442 | orchestrator | 2026-04-09 01:31:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:33.583724 | orchestrator | 2026-04-09 01:31:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:33.584595 | orchestrator | 2026-04-09 01:31:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:33.584633 | orchestrator | 2026-04-09 01:31:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:36.628725 | orchestrator | 2026-04-09 01:31:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:36.629842 | orchestrator | 2026-04-09 01:31:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:36.629884 | orchestrator | 2026-04-09 01:31:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:39.677603 | orchestrator | 2026-04-09 01:31:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:39.678804 | orchestrator | 2026-04-09 01:31:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:31:39.678889 | orchestrator | 2026-04-09 01:31:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:42.729551 | orchestrator | 2026-04-09 01:31:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:42.730929 | orchestrator | 2026-04-09 01:31:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:42.731035 | orchestrator | 2026-04-09 01:31:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:45.781952 | orchestrator | 2026-04-09 01:31:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:45.785698 | orchestrator | 2026-04-09 01:31:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:45.785760 | orchestrator | 2026-04-09 01:31:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:48.841024 | orchestrator | 2026-04-09 01:31:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:48.843163 | orchestrator | 2026-04-09 01:31:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:48.843693 | orchestrator | 2026-04-09 01:31:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:51.894500 | orchestrator | 2026-04-09 01:31:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:51.895712 | orchestrator | 2026-04-09 01:31:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:51.895772 | orchestrator | 2026-04-09 01:31:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:31:54.940201 | orchestrator | 2026-04-09 01:31:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:54.943081 | orchestrator | 2026-04-09 01:31:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:54.943190 | orchestrator | 2026-04-09 01:31:54 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:31:57.996244 | orchestrator | 2026-04-09 01:31:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:31:57.998708 | orchestrator | 2026-04-09 01:31:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:31:57.998761 | orchestrator | 2026-04-09 01:31:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:01.048710 | orchestrator | 2026-04-09 01:32:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:01.051256 | orchestrator | 2026-04-09 01:32:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:01.051350 | orchestrator | 2026-04-09 01:32:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:04.101487 | orchestrator | 2026-04-09 01:32:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:04.102791 | orchestrator | 2026-04-09 01:32:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:04.102851 | orchestrator | 2026-04-09 01:32:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:07.154385 | orchestrator | 2026-04-09 01:32:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:07.156785 | orchestrator | 2026-04-09 01:32:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:07.156885 | orchestrator | 2026-04-09 01:32:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:10.201361 | orchestrator | 2026-04-09 01:32:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:10.205948 | orchestrator | 2026-04-09 01:32:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:10.206041 | orchestrator | 2026-04-09 01:32:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:13.254484 | orchestrator | 2026-04-09 
01:32:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:13.257425 | orchestrator | 2026-04-09 01:32:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:13.257464 | orchestrator | 2026-04-09 01:32:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:16.310078 | orchestrator | 2026-04-09 01:32:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:16.311813 | orchestrator | 2026-04-09 01:32:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:16.311890 | orchestrator | 2026-04-09 01:32:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:19.364050 | orchestrator | 2026-04-09 01:32:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:19.365053 | orchestrator | 2026-04-09 01:32:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:19.365097 | orchestrator | 2026-04-09 01:32:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:22.411002 | orchestrator | 2026-04-09 01:32:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:22.415068 | orchestrator | 2026-04-09 01:32:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:22.415114 | orchestrator | 2026-04-09 01:32:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:25.462757 | orchestrator | 2026-04-09 01:32:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:25.464623 | orchestrator | 2026-04-09 01:32:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:25.464693 | orchestrator | 2026-04-09 01:32:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:28.513686 | orchestrator | 2026-04-09 01:32:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:32:28.517165 | orchestrator | 2026-04-09 01:32:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:28.517233 | orchestrator | 2026-04-09 01:32:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:31.560467 | orchestrator | 2026-04-09 01:32:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:31.562202 | orchestrator | 2026-04-09 01:32:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:31.562251 | orchestrator | 2026-04-09 01:32:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:34.607688 | orchestrator | 2026-04-09 01:32:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:34.609929 | orchestrator | 2026-04-09 01:32:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:34.609989 | orchestrator | 2026-04-09 01:32:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:37.656950 | orchestrator | 2026-04-09 01:32:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:37.658930 | orchestrator | 2026-04-09 01:32:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:37.658992 | orchestrator | 2026-04-09 01:32:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:40.706743 | orchestrator | 2026-04-09 01:32:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:40.708362 | orchestrator | 2026-04-09 01:32:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:40.708424 | orchestrator | 2026-04-09 01:32:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:43.748508 | orchestrator | 2026-04-09 01:32:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:43.748844 | orchestrator | 2026-04-09 01:32:43 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:43.749667 | orchestrator | 2026-04-09 01:32:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:46.797008 | orchestrator | 2026-04-09 01:32:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:46.799035 | orchestrator | 2026-04-09 01:32:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:46.799099 | orchestrator | 2026-04-09 01:32:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:49.847740 | orchestrator | 2026-04-09 01:32:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:49.850192 | orchestrator | 2026-04-09 01:32:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:49.850358 | orchestrator | 2026-04-09 01:32:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:52.896762 | orchestrator | 2026-04-09 01:32:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:52.899160 | orchestrator | 2026-04-09 01:32:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:52.899239 | orchestrator | 2026-04-09 01:32:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:55.949878 | orchestrator | 2026-04-09 01:32:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:55.951718 | orchestrator | 2026-04-09 01:32:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:32:55.951874 | orchestrator | 2026-04-09 01:32:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:32:58.996438 | orchestrator | 2026-04-09 01:32:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:32:58.999941 | orchestrator | 2026-04-09 01:32:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:32:58.999998 | orchestrator | 2026-04-09 01:32:59 | INFO  | Wait 1 second(s) until the next check
2026-04-09 01:33:02.044774 | orchestrator | 2026-04-09 01:33:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 01:33:02.046359 | orchestrator | 2026-04-09 01:33:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 01:33:02.046430 | orchestrator | 2026-04-09 01:33:02 | INFO  | Wait 1 second(s) until the next check
[identical polling output repeated every ~3 seconds from 01:33:05 to 01:37:57; both tasks remained in state STARTED throughout]
2026-04-09 01:38:00.816813 | orchestrator | 2026-04-09 01:38:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 01:38:00.818796 | orchestrator | 2026-04-09 01:38:00 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:00.818951 | orchestrator | 2026-04-09 01:38:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:03.862118 | orchestrator | 2026-04-09 01:38:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:04.047237 | orchestrator | 2026-04-09 01:38:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:04.047292 | orchestrator | 2026-04-09 01:38:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:06.927692 | orchestrator | 2026-04-09 01:38:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:06.928753 | orchestrator | 2026-04-09 01:38:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:06.928854 | orchestrator | 2026-04-09 01:38:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:09.989473 | orchestrator | 2026-04-09 01:38:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:09.991388 | orchestrator | 2026-04-09 01:38:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:09.991455 | orchestrator | 2026-04-09 01:38:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:13.043234 | orchestrator | 2026-04-09 01:38:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:13.044976 | orchestrator | 2026-04-09 01:38:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:13.045064 | orchestrator | 2026-04-09 01:38:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:16.103431 | orchestrator | 2026-04-09 01:38:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:16.103672 | orchestrator | 2026-04-09 01:38:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:38:16.103699 | orchestrator | 2026-04-09 01:38:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:19.164899 | orchestrator | 2026-04-09 01:38:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:19.166660 | orchestrator | 2026-04-09 01:38:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:19.166721 | orchestrator | 2026-04-09 01:38:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:22.224721 | orchestrator | 2026-04-09 01:38:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:22.226754 | orchestrator | 2026-04-09 01:38:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:22.226815 | orchestrator | 2026-04-09 01:38:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:25.279175 | orchestrator | 2026-04-09 01:38:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:25.280242 | orchestrator | 2026-04-09 01:38:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:25.280828 | orchestrator | 2026-04-09 01:38:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:28.325580 | orchestrator | 2026-04-09 01:38:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:28.328372 | orchestrator | 2026-04-09 01:38:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:28.328515 | orchestrator | 2026-04-09 01:38:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:31.378091 | orchestrator | 2026-04-09 01:38:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:31.381683 | orchestrator | 2026-04-09 01:38:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:31.381775 | orchestrator | 2026-04-09 01:38:31 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:38:34.427187 | orchestrator | 2026-04-09 01:38:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:34.429833 | orchestrator | 2026-04-09 01:38:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:34.429953 | orchestrator | 2026-04-09 01:38:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:37.485084 | orchestrator | 2026-04-09 01:38:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:37.487386 | orchestrator | 2026-04-09 01:38:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:37.487481 | orchestrator | 2026-04-09 01:38:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:40.533697 | orchestrator | 2026-04-09 01:38:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:40.535963 | orchestrator | 2026-04-09 01:38:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:40.536006 | orchestrator | 2026-04-09 01:38:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:43.583021 | orchestrator | 2026-04-09 01:38:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:43.585579 | orchestrator | 2026-04-09 01:38:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:43.585634 | orchestrator | 2026-04-09 01:38:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:46.630486 | orchestrator | 2026-04-09 01:38:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:46.632090 | orchestrator | 2026-04-09 01:38:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:46.632346 | orchestrator | 2026-04-09 01:38:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:49.678438 | orchestrator | 2026-04-09 
01:38:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:49.679574 | orchestrator | 2026-04-09 01:38:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:49.679624 | orchestrator | 2026-04-09 01:38:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:52.736527 | orchestrator | 2026-04-09 01:38:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:52.737798 | orchestrator | 2026-04-09 01:38:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:52.737833 | orchestrator | 2026-04-09 01:38:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:55.786915 | orchestrator | 2026-04-09 01:38:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:55.788285 | orchestrator | 2026-04-09 01:38:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:55.788430 | orchestrator | 2026-04-09 01:38:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:38:58.830484 | orchestrator | 2026-04-09 01:38:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:38:58.830655 | orchestrator | 2026-04-09 01:38:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:38:58.830676 | orchestrator | 2026-04-09 01:38:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:01.879551 | orchestrator | 2026-04-09 01:39:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:01.881754 | orchestrator | 2026-04-09 01:39:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:01.881813 | orchestrator | 2026-04-09 01:39:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:04.923607 | orchestrator | 2026-04-09 01:39:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:39:04.925101 | orchestrator | 2026-04-09 01:39:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:04.925499 | orchestrator | 2026-04-09 01:39:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:07.975856 | orchestrator | 2026-04-09 01:39:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:07.978113 | orchestrator | 2026-04-09 01:39:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:07.978181 | orchestrator | 2026-04-09 01:39:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:11.023938 | orchestrator | 2026-04-09 01:39:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:11.026603 | orchestrator | 2026-04-09 01:39:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:11.026654 | orchestrator | 2026-04-09 01:39:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:14.067424 | orchestrator | 2026-04-09 01:39:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:14.068691 | orchestrator | 2026-04-09 01:39:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:14.068818 | orchestrator | 2026-04-09 01:39:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:17.121421 | orchestrator | 2026-04-09 01:39:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:17.122649 | orchestrator | 2026-04-09 01:39:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:17.122766 | orchestrator | 2026-04-09 01:39:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:20.169099 | orchestrator | 2026-04-09 01:39:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:20.172703 | orchestrator | 2026-04-09 01:39:20 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:20.172769 | orchestrator | 2026-04-09 01:39:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:23.216509 | orchestrator | 2026-04-09 01:39:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:23.219444 | orchestrator | 2026-04-09 01:39:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:23.219700 | orchestrator | 2026-04-09 01:39:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:26.268022 | orchestrator | 2026-04-09 01:39:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:26.269634 | orchestrator | 2026-04-09 01:39:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:26.269755 | orchestrator | 2026-04-09 01:39:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:29.318965 | orchestrator | 2026-04-09 01:39:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:29.321130 | orchestrator | 2026-04-09 01:39:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:29.321242 | orchestrator | 2026-04-09 01:39:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:32.368560 | orchestrator | 2026-04-09 01:39:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:32.371402 | orchestrator | 2026-04-09 01:39:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:32.371444 | orchestrator | 2026-04-09 01:39:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:35.420152 | orchestrator | 2026-04-09 01:39:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:35.422647 | orchestrator | 2026-04-09 01:39:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:39:35.422752 | orchestrator | 2026-04-09 01:39:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:38.475530 | orchestrator | 2026-04-09 01:39:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:38.476055 | orchestrator | 2026-04-09 01:39:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:38.476773 | orchestrator | 2026-04-09 01:39:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:41.528057 | orchestrator | 2026-04-09 01:39:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:41.530510 | orchestrator | 2026-04-09 01:39:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:41.530571 | orchestrator | 2026-04-09 01:39:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:44.580916 | orchestrator | 2026-04-09 01:39:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:44.583116 | orchestrator | 2026-04-09 01:39:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:44.583156 | orchestrator | 2026-04-09 01:39:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:47.631022 | orchestrator | 2026-04-09 01:39:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:47.633018 | orchestrator | 2026-04-09 01:39:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:47.633302 | orchestrator | 2026-04-09 01:39:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:50.685785 | orchestrator | 2026-04-09 01:39:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:50.686902 | orchestrator | 2026-04-09 01:39:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:50.686934 | orchestrator | 2026-04-09 01:39:50 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:39:53.739071 | orchestrator | 2026-04-09 01:39:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:53.740909 | orchestrator | 2026-04-09 01:39:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:53.740953 | orchestrator | 2026-04-09 01:39:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:56.796701 | orchestrator | 2026-04-09 01:39:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:39:56.798992 | orchestrator | 2026-04-09 01:39:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:39:56.799080 | orchestrator | 2026-04-09 01:39:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:39:59.851615 | orchestrator | 2026-04-09 01:39:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:00.371609 | orchestrator | 2026-04-09 01:39:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:00.371662 | orchestrator | 2026-04-09 01:39:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:02.899017 | orchestrator | 2026-04-09 01:40:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:02.902433 | orchestrator | 2026-04-09 01:40:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:02.902494 | orchestrator | 2026-04-09 01:40:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:05.953294 | orchestrator | 2026-04-09 01:40:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:05.954351 | orchestrator | 2026-04-09 01:40:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:05.954811 | orchestrator | 2026-04-09 01:40:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:09.003319 | orchestrator | 2026-04-09 
01:40:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:09.006448 | orchestrator | 2026-04-09 01:40:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:09.006649 | orchestrator | 2026-04-09 01:40:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:12.061740 | orchestrator | 2026-04-09 01:40:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:12.064131 | orchestrator | 2026-04-09 01:40:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:12.064295 | orchestrator | 2026-04-09 01:40:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:15.111839 | orchestrator | 2026-04-09 01:40:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:15.112387 | orchestrator | 2026-04-09 01:40:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:15.112420 | orchestrator | 2026-04-09 01:40:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:18.157705 | orchestrator | 2026-04-09 01:40:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:18.159165 | orchestrator | 2026-04-09 01:40:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:18.159404 | orchestrator | 2026-04-09 01:40:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:21.210717 | orchestrator | 2026-04-09 01:40:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:21.213307 | orchestrator | 2026-04-09 01:40:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:21.213463 | orchestrator | 2026-04-09 01:40:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:24.262262 | orchestrator | 2026-04-09 01:40:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:40:24.263314 | orchestrator | 2026-04-09 01:40:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:24.263354 | orchestrator | 2026-04-09 01:40:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:27.315609 | orchestrator | 2026-04-09 01:40:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:27.317162 | orchestrator | 2026-04-09 01:40:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:27.317339 | orchestrator | 2026-04-09 01:40:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:30.363559 | orchestrator | 2026-04-09 01:40:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:30.365524 | orchestrator | 2026-04-09 01:40:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:30.365915 | orchestrator | 2026-04-09 01:40:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:33.417585 | orchestrator | 2026-04-09 01:40:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:33.420293 | orchestrator | 2026-04-09 01:40:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:33.420352 | orchestrator | 2026-04-09 01:40:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:36.462642 | orchestrator | 2026-04-09 01:40:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:36.463998 | orchestrator | 2026-04-09 01:40:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:36.464026 | orchestrator | 2026-04-09 01:40:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:39.509947 | orchestrator | 2026-04-09 01:40:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:39.512191 | orchestrator | 2026-04-09 01:40:39 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:39.512358 | orchestrator | 2026-04-09 01:40:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:42.560109 | orchestrator | 2026-04-09 01:40:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:42.562767 | orchestrator | 2026-04-09 01:40:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:42.562820 | orchestrator | 2026-04-09 01:40:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:45.612883 | orchestrator | 2026-04-09 01:40:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:45.614402 | orchestrator | 2026-04-09 01:40:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:45.614482 | orchestrator | 2026-04-09 01:40:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:48.660947 | orchestrator | 2026-04-09 01:40:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:48.663240 | orchestrator | 2026-04-09 01:40:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:48.663375 | orchestrator | 2026-04-09 01:40:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:51.711718 | orchestrator | 2026-04-09 01:40:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:51.713755 | orchestrator | 2026-04-09 01:40:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:51.713814 | orchestrator | 2026-04-09 01:40:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:54.758730 | orchestrator | 2026-04-09 01:40:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:54.760457 | orchestrator | 2026-04-09 01:40:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:40:54.760522 | orchestrator | 2026-04-09 01:40:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:40:57.808662 | orchestrator | 2026-04-09 01:40:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:40:57.810773 | orchestrator | 2026-04-09 01:40:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:40:57.810821 | orchestrator | 2026-04-09 01:40:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:00.854007 | orchestrator | 2026-04-09 01:41:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:00.855761 | orchestrator | 2026-04-09 01:41:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:00.855802 | orchestrator | 2026-04-09 01:41:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:03.906672 | orchestrator | 2026-04-09 01:41:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:03.908339 | orchestrator | 2026-04-09 01:41:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:03.908491 | orchestrator | 2026-04-09 01:41:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:06.954928 | orchestrator | 2026-04-09 01:41:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:06.956094 | orchestrator | 2026-04-09 01:41:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:06.956155 | orchestrator | 2026-04-09 01:41:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:09.999877 | orchestrator | 2026-04-09 01:41:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:10.000872 | orchestrator | 2026-04-09 01:41:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:10.000938 | orchestrator | 2026-04-09 01:41:10 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:41:13.052432 | orchestrator | 2026-04-09 01:41:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:13.054061 | orchestrator | 2026-04-09 01:41:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:13.054133 | orchestrator | 2026-04-09 01:41:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:16.101566 | orchestrator | 2026-04-09 01:41:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:16.103044 | orchestrator | 2026-04-09 01:41:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:16.103093 | orchestrator | 2026-04-09 01:41:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:19.142595 | orchestrator | 2026-04-09 01:41:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:19.145086 | orchestrator | 2026-04-09 01:41:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:19.145169 | orchestrator | 2026-04-09 01:41:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:22.191300 | orchestrator | 2026-04-09 01:41:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:22.193128 | orchestrator | 2026-04-09 01:41:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:22.193185 | orchestrator | 2026-04-09 01:41:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:25.237903 | orchestrator | 2026-04-09 01:41:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:25.241000 | orchestrator | 2026-04-09 01:41:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:25.242076 | orchestrator | 2026-04-09 01:41:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:28.284050 | orchestrator | 2026-04-09 
01:41:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:28.285353 | orchestrator | 2026-04-09 01:41:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:28.285493 | orchestrator | 2026-04-09 01:41:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:31.324597 | orchestrator | 2026-04-09 01:41:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:31.325764 | orchestrator | 2026-04-09 01:41:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:31.325811 | orchestrator | 2026-04-09 01:41:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:34.369714 | orchestrator | 2026-04-09 01:41:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:34.371426 | orchestrator | 2026-04-09 01:41:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:34.371487 | orchestrator | 2026-04-09 01:41:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:37.418843 | orchestrator | 2026-04-09 01:41:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:37.420593 | orchestrator | 2026-04-09 01:41:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:37.420668 | orchestrator | 2026-04-09 01:41:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:40.468117 | orchestrator | 2026-04-09 01:41:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:41:40.469104 | orchestrator | 2026-04-09 01:41:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:40.469153 | orchestrator | 2026-04-09 01:41:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:41:43.516294 | orchestrator | 2026-04-09 01:41:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:41:43.518264 | orchestrator | 2026-04-09 01:41:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:41:43.518312 | orchestrator | 2026-04-09 01:41:43 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output trimmed: tasks 6828e9fb-0b8a-4283-9fa1-3c6673200e24 and 4918e61a-8c4a-42f2-9f33-2d15624c1ede remained in state STARTED, rechecked every ~3 seconds from 01:41:46 through 01:47:12 ...]
2026-04-09 01:47:15.727133 | orchestrator | 2026-04-09 01:47:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:15.728324 | orchestrator | 2026-04-09 01:47:15 | INFO
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:15.728379 | orchestrator | 2026-04-09 01:47:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:18.775266 | orchestrator | 2026-04-09 01:47:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:18.776551 | orchestrator | 2026-04-09 01:47:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:18.776663 | orchestrator | 2026-04-09 01:47:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:21.824677 | orchestrator | 2026-04-09 01:47:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:21.825512 | orchestrator | 2026-04-09 01:47:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:21.826005 | orchestrator | 2026-04-09 01:47:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:24.878600 | orchestrator | 2026-04-09 01:47:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:24.880501 | orchestrator | 2026-04-09 01:47:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:24.880532 | orchestrator | 2026-04-09 01:47:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:27.925799 | orchestrator | 2026-04-09 01:47:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:27.927767 | orchestrator | 2026-04-09 01:47:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:27.927898 | orchestrator | 2026-04-09 01:47:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:30.967658 | orchestrator | 2026-04-09 01:47:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:30.968932 | orchestrator | 2026-04-09 01:47:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:47:30.968994 | orchestrator | 2026-04-09 01:47:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:34.010540 | orchestrator | 2026-04-09 01:47:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:34.012203 | orchestrator | 2026-04-09 01:47:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:34.012254 | orchestrator | 2026-04-09 01:47:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:37.055905 | orchestrator | 2026-04-09 01:47:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:37.057993 | orchestrator | 2026-04-09 01:47:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:37.058100 | orchestrator | 2026-04-09 01:47:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:40.109039 | orchestrator | 2026-04-09 01:47:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:40.111487 | orchestrator | 2026-04-09 01:47:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:40.111546 | orchestrator | 2026-04-09 01:47:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:43.152053 | orchestrator | 2026-04-09 01:47:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:43.154183 | orchestrator | 2026-04-09 01:47:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:43.154234 | orchestrator | 2026-04-09 01:47:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:46.197277 | orchestrator | 2026-04-09 01:47:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:46.198797 | orchestrator | 2026-04-09 01:47:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:46.199064 | orchestrator | 2026-04-09 01:47:46 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:47:49.243526 | orchestrator | 2026-04-09 01:47:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:49.245236 | orchestrator | 2026-04-09 01:47:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:49.245304 | orchestrator | 2026-04-09 01:47:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:52.293267 | orchestrator | 2026-04-09 01:47:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:52.294200 | orchestrator | 2026-04-09 01:47:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:52.294262 | orchestrator | 2026-04-09 01:47:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:55.345026 | orchestrator | 2026-04-09 01:47:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:55.347112 | orchestrator | 2026-04-09 01:47:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:55.347208 | orchestrator | 2026-04-09 01:47:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:47:58.401253 | orchestrator | 2026-04-09 01:47:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:47:58.403178 | orchestrator | 2026-04-09 01:47:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:47:58.403351 | orchestrator | 2026-04-09 01:47:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:01.452548 | orchestrator | 2026-04-09 01:48:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:01.454198 | orchestrator | 2026-04-09 01:48:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:01.454251 | orchestrator | 2026-04-09 01:48:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:04.497529 | orchestrator | 2026-04-09 
01:48:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:04.498924 | orchestrator | 2026-04-09 01:48:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:04.499085 | orchestrator | 2026-04-09 01:48:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:07.547384 | orchestrator | 2026-04-09 01:48:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:07.549457 | orchestrator | 2026-04-09 01:48:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:07.549530 | orchestrator | 2026-04-09 01:48:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:10.593590 | orchestrator | 2026-04-09 01:48:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:10.594059 | orchestrator | 2026-04-09 01:48:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:10.594087 | orchestrator | 2026-04-09 01:48:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:13.636837 | orchestrator | 2026-04-09 01:48:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:13.638872 | orchestrator | 2026-04-09 01:48:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:13.638954 | orchestrator | 2026-04-09 01:48:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:16.683181 | orchestrator | 2026-04-09 01:48:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:16.684653 | orchestrator | 2026-04-09 01:48:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:16.684701 | orchestrator | 2026-04-09 01:48:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:19.731568 | orchestrator | 2026-04-09 01:48:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:48:19.733072 | orchestrator | 2026-04-09 01:48:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:19.733394 | orchestrator | 2026-04-09 01:48:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:22.783796 | orchestrator | 2026-04-09 01:48:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:22.785944 | orchestrator | 2026-04-09 01:48:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:22.786100 | orchestrator | 2026-04-09 01:48:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:25.838468 | orchestrator | 2026-04-09 01:48:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:25.840699 | orchestrator | 2026-04-09 01:48:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:25.840731 | orchestrator | 2026-04-09 01:48:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:28.886680 | orchestrator | 2026-04-09 01:48:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:28.888545 | orchestrator | 2026-04-09 01:48:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:28.888583 | orchestrator | 2026-04-09 01:48:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:31.931903 | orchestrator | 2026-04-09 01:48:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:31.933367 | orchestrator | 2026-04-09 01:48:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:31.933522 | orchestrator | 2026-04-09 01:48:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:34.981058 | orchestrator | 2026-04-09 01:48:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:34.983018 | orchestrator | 2026-04-09 01:48:34 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:34.983155 | orchestrator | 2026-04-09 01:48:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:38.029165 | orchestrator | 2026-04-09 01:48:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:38.029813 | orchestrator | 2026-04-09 01:48:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:38.029925 | orchestrator | 2026-04-09 01:48:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:41.075795 | orchestrator | 2026-04-09 01:48:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:41.078738 | orchestrator | 2026-04-09 01:48:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:41.078818 | orchestrator | 2026-04-09 01:48:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:44.121759 | orchestrator | 2026-04-09 01:48:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:44.123399 | orchestrator | 2026-04-09 01:48:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:44.123683 | orchestrator | 2026-04-09 01:48:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:47.170990 | orchestrator | 2026-04-09 01:48:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:47.173115 | orchestrator | 2026-04-09 01:48:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:47.173166 | orchestrator | 2026-04-09 01:48:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:50.221186 | orchestrator | 2026-04-09 01:48:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:50.223119 | orchestrator | 2026-04-09 01:48:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:48:50.223170 | orchestrator | 2026-04-09 01:48:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:53.273563 | orchestrator | 2026-04-09 01:48:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:53.274232 | orchestrator | 2026-04-09 01:48:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:53.274274 | orchestrator | 2026-04-09 01:48:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:56.328068 | orchestrator | 2026-04-09 01:48:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:56.330402 | orchestrator | 2026-04-09 01:48:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:56.330532 | orchestrator | 2026-04-09 01:48:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:48:59.381454 | orchestrator | 2026-04-09 01:48:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:48:59.384328 | orchestrator | 2026-04-09 01:48:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:48:59.384561 | orchestrator | 2026-04-09 01:48:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:02.429843 | orchestrator | 2026-04-09 01:49:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:02.430927 | orchestrator | 2026-04-09 01:49:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:02.431029 | orchestrator | 2026-04-09 01:49:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:05.478072 | orchestrator | 2026-04-09 01:49:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:05.480348 | orchestrator | 2026-04-09 01:49:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:05.480393 | orchestrator | 2026-04-09 01:49:05 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:49:08.526659 | orchestrator | 2026-04-09 01:49:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:08.527877 | orchestrator | 2026-04-09 01:49:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:08.527935 | orchestrator | 2026-04-09 01:49:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:11.569791 | orchestrator | 2026-04-09 01:49:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:11.571723 | orchestrator | 2026-04-09 01:49:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:11.571778 | orchestrator | 2026-04-09 01:49:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:14.616414 | orchestrator | 2026-04-09 01:49:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:14.618552 | orchestrator | 2026-04-09 01:49:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:14.618804 | orchestrator | 2026-04-09 01:49:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:17.663776 | orchestrator | 2026-04-09 01:49:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:17.665115 | orchestrator | 2026-04-09 01:49:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:17.665211 | orchestrator | 2026-04-09 01:49:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:20.706205 | orchestrator | 2026-04-09 01:49:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:20.708325 | orchestrator | 2026-04-09 01:49:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:20.708497 | orchestrator | 2026-04-09 01:49:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:23.753097 | orchestrator | 2026-04-09 
01:49:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:23.755246 | orchestrator | 2026-04-09 01:49:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:23.755301 | orchestrator | 2026-04-09 01:49:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:26.800981 | orchestrator | 2026-04-09 01:49:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:26.803508 | orchestrator | 2026-04-09 01:49:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:26.803660 | orchestrator | 2026-04-09 01:49:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:29.848933 | orchestrator | 2026-04-09 01:49:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:29.850578 | orchestrator | 2026-04-09 01:49:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:29.850592 | orchestrator | 2026-04-09 01:49:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:32.897366 | orchestrator | 2026-04-09 01:49:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:32.897560 | orchestrator | 2026-04-09 01:49:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:32.897575 | orchestrator | 2026-04-09 01:49:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:35.943746 | orchestrator | 2026-04-09 01:49:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:35.944401 | orchestrator | 2026-04-09 01:49:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:35.944556 | orchestrator | 2026-04-09 01:49:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:38.991074 | orchestrator | 2026-04-09 01:49:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:49:38.991836 | orchestrator | 2026-04-09 01:49:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:38.991878 | orchestrator | 2026-04-09 01:49:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:42.042173 | orchestrator | 2026-04-09 01:49:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:42.044152 | orchestrator | 2026-04-09 01:49:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:42.044389 | orchestrator | 2026-04-09 01:49:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:45.092790 | orchestrator | 2026-04-09 01:49:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:45.094939 | orchestrator | 2026-04-09 01:49:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:45.095035 | orchestrator | 2026-04-09 01:49:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:48.141512 | orchestrator | 2026-04-09 01:49:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:48.141583 | orchestrator | 2026-04-09 01:49:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:48.141590 | orchestrator | 2026-04-09 01:49:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:51.189524 | orchestrator | 2026-04-09 01:49:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:51.191689 | orchestrator | 2026-04-09 01:49:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:51.191728 | orchestrator | 2026-04-09 01:49:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:54.237273 | orchestrator | 2026-04-09 01:49:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:54.238937 | orchestrator | 2026-04-09 01:49:54 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:54.239091 | orchestrator | 2026-04-09 01:49:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:49:57.279345 | orchestrator | 2026-04-09 01:49:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:49:57.281323 | orchestrator | 2026-04-09 01:49:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:49:57.281376 | orchestrator | 2026-04-09 01:49:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:00.324481 | orchestrator | 2026-04-09 01:50:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:00.326473 | orchestrator | 2026-04-09 01:50:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:00.326512 | orchestrator | 2026-04-09 01:50:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:03.369396 | orchestrator | 2026-04-09 01:50:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:03.371163 | orchestrator | 2026-04-09 01:50:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:03.371574 | orchestrator | 2026-04-09 01:50:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:06.415330 | orchestrator | 2026-04-09 01:50:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:06.417667 | orchestrator | 2026-04-09 01:50:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:06.417987 | orchestrator | 2026-04-09 01:50:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:09.458378 | orchestrator | 2026-04-09 01:50:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:09.459764 | orchestrator | 2026-04-09 01:50:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:50:09.460025 | orchestrator | 2026-04-09 01:50:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:12.507810 | orchestrator | 2026-04-09 01:50:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:12.511281 | orchestrator | 2026-04-09 01:50:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:12.511362 | orchestrator | 2026-04-09 01:50:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:15.560262 | orchestrator | 2026-04-09 01:50:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:15.561718 | orchestrator | 2026-04-09 01:50:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:15.561764 | orchestrator | 2026-04-09 01:50:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:18.613603 | orchestrator | 2026-04-09 01:50:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:18.614637 | orchestrator | 2026-04-09 01:50:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:18.614677 | orchestrator | 2026-04-09 01:50:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:21.665757 | orchestrator | 2026-04-09 01:50:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:21.667670 | orchestrator | 2026-04-09 01:50:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:21.667694 | orchestrator | 2026-04-09 01:50:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:24.719560 | orchestrator | 2026-04-09 01:50:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:24.721666 | orchestrator | 2026-04-09 01:50:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:24.721751 | orchestrator | 2026-04-09 01:50:24 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:50:27.770289 | orchestrator | 2026-04-09 01:50:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:27.774432 | orchestrator | 2026-04-09 01:50:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:27.774525 | orchestrator | 2026-04-09 01:50:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:30.825313 | orchestrator | 2026-04-09 01:50:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:30.828110 | orchestrator | 2026-04-09 01:50:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:30.828204 | orchestrator | 2026-04-09 01:50:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:33.876644 | orchestrator | 2026-04-09 01:50:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:33.878281 | orchestrator | 2026-04-09 01:50:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:33.878365 | orchestrator | 2026-04-09 01:50:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:36.925715 | orchestrator | 2026-04-09 01:50:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:36.928255 | orchestrator | 2026-04-09 01:50:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:36.928437 | orchestrator | 2026-04-09 01:50:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:39.976131 | orchestrator | 2026-04-09 01:50:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:39.977813 | orchestrator | 2026-04-09 01:50:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:39.977917 | orchestrator | 2026-04-09 01:50:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:43.021794 | orchestrator | 2026-04-09 
01:50:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:43.023003 | orchestrator | 2026-04-09 01:50:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:43.023931 | orchestrator | 2026-04-09 01:50:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:46.074971 | orchestrator | 2026-04-09 01:50:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:46.079578 | orchestrator | 2026-04-09 01:50:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:46.079650 | orchestrator | 2026-04-09 01:50:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:49.124381 | orchestrator | 2026-04-09 01:50:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:49.124836 | orchestrator | 2026-04-09 01:50:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:49.124858 | orchestrator | 2026-04-09 01:50:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:52.169566 | orchestrator | 2026-04-09 01:50:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:52.171580 | orchestrator | 2026-04-09 01:50:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:52.171620 | orchestrator | 2026-04-09 01:50:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:55.211842 | orchestrator | 2026-04-09 01:50:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:50:55.212713 | orchestrator | 2026-04-09 01:50:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:55.212952 | orchestrator | 2026-04-09 01:50:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:50:58.259667 | orchestrator | 2026-04-09 01:50:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:50:58.260872 | orchestrator | 2026-04-09 01:50:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:50:58.260908 | orchestrator | 2026-04-09 01:50:58 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output condensed: from 2026-04-09 01:51:01 through 01:56:12 the tasks 6828e9fb-0b8a-4283-9fa1-3c6673200e24 and 4918e61a-8c4a-42f2-9f33-2d15624c1ede were polled roughly every 3 seconds, each cycle reporting both tasks "in state STARTED" followed by "Wait 1 second(s) until the next check" ...]
2026-04-09 01:56:15.428200 | orchestrator | 2026-04-09 01:56:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state
STARTED 2026-04-09 01:56:15.430352 | orchestrator | 2026-04-09 01:56:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:15.430419 | orchestrator | 2026-04-09 01:56:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:18.474501 | orchestrator | 2026-04-09 01:56:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:18.476262 | orchestrator | 2026-04-09 01:56:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:18.476420 | orchestrator | 2026-04-09 01:56:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:21.517084 | orchestrator | 2026-04-09 01:56:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:21.519342 | orchestrator | 2026-04-09 01:56:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:21.519390 | orchestrator | 2026-04-09 01:56:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:24.565862 | orchestrator | 2026-04-09 01:56:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:24.567839 | orchestrator | 2026-04-09 01:56:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:24.567923 | orchestrator | 2026-04-09 01:56:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:27.620548 | orchestrator | 2026-04-09 01:56:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:27.621590 | orchestrator | 2026-04-09 01:56:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:27.621789 | orchestrator | 2026-04-09 01:56:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:30.666589 | orchestrator | 2026-04-09 01:56:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:30.669743 | orchestrator | 2026-04-09 01:56:30 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:30.669814 | orchestrator | 2026-04-09 01:56:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:33.710420 | orchestrator | 2026-04-09 01:56:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:33.710784 | orchestrator | 2026-04-09 01:56:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:33.710825 | orchestrator | 2026-04-09 01:56:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:36.754371 | orchestrator | 2026-04-09 01:56:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:36.755798 | orchestrator | 2026-04-09 01:56:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:36.755918 | orchestrator | 2026-04-09 01:56:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:39.803273 | orchestrator | 2026-04-09 01:56:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:39.805109 | orchestrator | 2026-04-09 01:56:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:39.805186 | orchestrator | 2026-04-09 01:56:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:42.853722 | orchestrator | 2026-04-09 01:56:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:42.855998 | orchestrator | 2026-04-09 01:56:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:42.856570 | orchestrator | 2026-04-09 01:56:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:45.904938 | orchestrator | 2026-04-09 01:56:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:45.907071 | orchestrator | 2026-04-09 01:56:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:56:45.907145 | orchestrator | 2026-04-09 01:56:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:48.952434 | orchestrator | 2026-04-09 01:56:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:48.954109 | orchestrator | 2026-04-09 01:56:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:48.954173 | orchestrator | 2026-04-09 01:56:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:52.002962 | orchestrator | 2026-04-09 01:56:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:52.006431 | orchestrator | 2026-04-09 01:56:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:52.006605 | orchestrator | 2026-04-09 01:56:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:55.053578 | orchestrator | 2026-04-09 01:56:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:55.056353 | orchestrator | 2026-04-09 01:56:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:55.056406 | orchestrator | 2026-04-09 01:56:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:56:58.110141 | orchestrator | 2026-04-09 01:56:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:56:58.111343 | orchestrator | 2026-04-09 01:56:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:56:58.111386 | orchestrator | 2026-04-09 01:56:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:01.150411 | orchestrator | 2026-04-09 01:57:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:01.151144 | orchestrator | 2026-04-09 01:57:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:01.151162 | orchestrator | 2026-04-09 01:57:01 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:57:04.199949 | orchestrator | 2026-04-09 01:57:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:04.202260 | orchestrator | 2026-04-09 01:57:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:04.202318 | orchestrator | 2026-04-09 01:57:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:07.246686 | orchestrator | 2026-04-09 01:57:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:07.248537 | orchestrator | 2026-04-09 01:57:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:07.248567 | orchestrator | 2026-04-09 01:57:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:10.299168 | orchestrator | 2026-04-09 01:57:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:10.302182 | orchestrator | 2026-04-09 01:57:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:10.302467 | orchestrator | 2026-04-09 01:57:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:13.353495 | orchestrator | 2026-04-09 01:57:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:13.354093 | orchestrator | 2026-04-09 01:57:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:13.354144 | orchestrator | 2026-04-09 01:57:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:16.399525 | orchestrator | 2026-04-09 01:57:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:16.400815 | orchestrator | 2026-04-09 01:57:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:16.400864 | orchestrator | 2026-04-09 01:57:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:19.451310 | orchestrator | 2026-04-09 
01:57:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:19.451509 | orchestrator | 2026-04-09 01:57:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:19.452083 | orchestrator | 2026-04-09 01:57:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:22.502135 | orchestrator | 2026-04-09 01:57:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:22.504246 | orchestrator | 2026-04-09 01:57:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:22.504453 | orchestrator | 2026-04-09 01:57:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:25.549796 | orchestrator | 2026-04-09 01:57:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:25.551917 | orchestrator | 2026-04-09 01:57:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:25.552002 | orchestrator | 2026-04-09 01:57:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:28.597363 | orchestrator | 2026-04-09 01:57:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:28.598083 | orchestrator | 2026-04-09 01:57:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:28.598239 | orchestrator | 2026-04-09 01:57:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:31.643800 | orchestrator | 2026-04-09 01:57:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:31.645230 | orchestrator | 2026-04-09 01:57:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:31.645281 | orchestrator | 2026-04-09 01:57:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:34.691523 | orchestrator | 2026-04-09 01:57:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:57:34.692934 | orchestrator | 2026-04-09 01:57:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:34.692995 | orchestrator | 2026-04-09 01:57:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:37.737584 | orchestrator | 2026-04-09 01:57:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:37.740023 | orchestrator | 2026-04-09 01:57:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:37.740130 | orchestrator | 2026-04-09 01:57:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:40.782183 | orchestrator | 2026-04-09 01:57:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:40.784789 | orchestrator | 2026-04-09 01:57:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:40.784834 | orchestrator | 2026-04-09 01:57:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:43.831731 | orchestrator | 2026-04-09 01:57:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:43.833614 | orchestrator | 2026-04-09 01:57:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:43.833687 | orchestrator | 2026-04-09 01:57:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:46.891102 | orchestrator | 2026-04-09 01:57:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:46.892279 | orchestrator | 2026-04-09 01:57:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:46.892338 | orchestrator | 2026-04-09 01:57:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:49.937142 | orchestrator | 2026-04-09 01:57:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:49.940093 | orchestrator | 2026-04-09 01:57:49 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:49.940144 | orchestrator | 2026-04-09 01:57:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:52.988602 | orchestrator | 2026-04-09 01:57:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:52.990265 | orchestrator | 2026-04-09 01:57:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:52.990459 | orchestrator | 2026-04-09 01:57:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:56.043570 | orchestrator | 2026-04-09 01:57:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:56.045139 | orchestrator | 2026-04-09 01:57:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:56.045185 | orchestrator | 2026-04-09 01:57:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:57:59.090222 | orchestrator | 2026-04-09 01:57:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:57:59.091009 | orchestrator | 2026-04-09 01:57:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:57:59.091084 | orchestrator | 2026-04-09 01:57:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:02.133279 | orchestrator | 2026-04-09 01:58:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:02.134403 | orchestrator | 2026-04-09 01:58:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:02.134476 | orchestrator | 2026-04-09 01:58:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:05.178533 | orchestrator | 2026-04-09 01:58:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:05.179583 | orchestrator | 2026-04-09 01:58:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:58:05.179666 | orchestrator | 2026-04-09 01:58:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:08.228856 | orchestrator | 2026-04-09 01:58:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:08.231035 | orchestrator | 2026-04-09 01:58:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:08.231139 | orchestrator | 2026-04-09 01:58:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:11.275559 | orchestrator | 2026-04-09 01:58:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:11.276972 | orchestrator | 2026-04-09 01:58:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:11.277307 | orchestrator | 2026-04-09 01:58:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:14.325665 | orchestrator | 2026-04-09 01:58:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:14.327603 | orchestrator | 2026-04-09 01:58:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:14.327639 | orchestrator | 2026-04-09 01:58:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:17.370425 | orchestrator | 2026-04-09 01:58:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:17.371970 | orchestrator | 2026-04-09 01:58:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:17.372034 | orchestrator | 2026-04-09 01:58:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:20.419079 | orchestrator | 2026-04-09 01:58:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:20.420384 | orchestrator | 2026-04-09 01:58:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:20.420425 | orchestrator | 2026-04-09 01:58:20 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:58:23.467920 | orchestrator | 2026-04-09 01:58:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:23.469358 | orchestrator | 2026-04-09 01:58:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:23.469410 | orchestrator | 2026-04-09 01:58:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:26.518859 | orchestrator | 2026-04-09 01:58:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:26.520102 | orchestrator | 2026-04-09 01:58:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:26.520155 | orchestrator | 2026-04-09 01:58:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:29.570086 | orchestrator | 2026-04-09 01:58:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:29.572254 | orchestrator | 2026-04-09 01:58:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:29.572335 | orchestrator | 2026-04-09 01:58:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:32.621857 | orchestrator | 2026-04-09 01:58:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:32.623499 | orchestrator | 2026-04-09 01:58:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:32.623552 | orchestrator | 2026-04-09 01:58:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:35.671148 | orchestrator | 2026-04-09 01:58:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:35.673130 | orchestrator | 2026-04-09 01:58:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:35.673191 | orchestrator | 2026-04-09 01:58:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:38.718392 | orchestrator | 2026-04-09 
01:58:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:38.719903 | orchestrator | 2026-04-09 01:58:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:38.719964 | orchestrator | 2026-04-09 01:58:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:41.766377 | orchestrator | 2026-04-09 01:58:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:41.768274 | orchestrator | 2026-04-09 01:58:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:41.768321 | orchestrator | 2026-04-09 01:58:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:44.816964 | orchestrator | 2026-04-09 01:58:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:44.818479 | orchestrator | 2026-04-09 01:58:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:44.818531 | orchestrator | 2026-04-09 01:58:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:47.865496 | orchestrator | 2026-04-09 01:58:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:47.868073 | orchestrator | 2026-04-09 01:58:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:47.868578 | orchestrator | 2026-04-09 01:58:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:50.914444 | orchestrator | 2026-04-09 01:58:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:50.916686 | orchestrator | 2026-04-09 01:58:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:50.916863 | orchestrator | 2026-04-09 01:58:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:53.965331 | orchestrator | 2026-04-09 01:58:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 01:58:53.969279 | orchestrator | 2026-04-09 01:58:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:53.969349 | orchestrator | 2026-04-09 01:58:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:58:57.013561 | orchestrator | 2026-04-09 01:58:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:58:57.015283 | orchestrator | 2026-04-09 01:58:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:58:57.015401 | orchestrator | 2026-04-09 01:58:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:00.054438 | orchestrator | 2026-04-09 01:59:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:00.056914 | orchestrator | 2026-04-09 01:59:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:00.056976 | orchestrator | 2026-04-09 01:59:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:03.109447 | orchestrator | 2026-04-09 01:59:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:03.110274 | orchestrator | 2026-04-09 01:59:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:03.110370 | orchestrator | 2026-04-09 01:59:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:06.155406 | orchestrator | 2026-04-09 01:59:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:06.156758 | orchestrator | 2026-04-09 01:59:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:06.156817 | orchestrator | 2026-04-09 01:59:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:09.204040 | orchestrator | 2026-04-09 01:59:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:09.206426 | orchestrator | 2026-04-09 01:59:09 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:09.206495 | orchestrator | 2026-04-09 01:59:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:12.252290 | orchestrator | 2026-04-09 01:59:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:12.253396 | orchestrator | 2026-04-09 01:59:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:12.253444 | orchestrator | 2026-04-09 01:59:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:15.302510 | orchestrator | 2026-04-09 01:59:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:15.305092 | orchestrator | 2026-04-09 01:59:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:15.305179 | orchestrator | 2026-04-09 01:59:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:18.356395 | orchestrator | 2026-04-09 01:59:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:18.358583 | orchestrator | 2026-04-09 01:59:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:18.358651 | orchestrator | 2026-04-09 01:59:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:21.403186 | orchestrator | 2026-04-09 01:59:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:21.405556 | orchestrator | 2026-04-09 01:59:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:21.405840 | orchestrator | 2026-04-09 01:59:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:24.449408 | orchestrator | 2026-04-09 01:59:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:24.450863 | orchestrator | 2026-04-09 01:59:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
01:59:24.451025 | orchestrator | 2026-04-09 01:59:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:27.492599 | orchestrator | 2026-04-09 01:59:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:27.493767 | orchestrator | 2026-04-09 01:59:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:27.493859 | orchestrator | 2026-04-09 01:59:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:30.535360 | orchestrator | 2026-04-09 01:59:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:30.535490 | orchestrator | 2026-04-09 01:59:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:30.535501 | orchestrator | 2026-04-09 01:59:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:33.584086 | orchestrator | 2026-04-09 01:59:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:33.585780 | orchestrator | 2026-04-09 01:59:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:33.585924 | orchestrator | 2026-04-09 01:59:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:36.630793 | orchestrator | 2026-04-09 01:59:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:36.632130 | orchestrator | 2026-04-09 01:59:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:36.632163 | orchestrator | 2026-04-09 01:59:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 01:59:39.681525 | orchestrator | 2026-04-09 01:59:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:39.682534 | orchestrator | 2026-04-09 01:59:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:39.682668 | orchestrator | 2026-04-09 01:59:39 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 01:59:42.723609 | orchestrator | 2026-04-09 01:59:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 01:59:42.725611 | orchestrator | 2026-04-09 01:59:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 01:59:42.725751 | orchestrator | 2026-04-09 01:59:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:06:56.833452 | orchestrator | 2026-04-09 02:06:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:06:56.836065 | orchestrator | 2026-04-09 02:06:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:06:56.836168 | orchestrator | 2026-04-09 02:06:56 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:06:59.889014 | orchestrator | 2026-04-09 02:06:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:06:59.889511 | orchestrator | 2026-04-09 02:06:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:06:59.889534 | orchestrator | 2026-04-09 02:06:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:02.933181 | orchestrator | 2026-04-09 02:07:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:02.934431 | orchestrator | 2026-04-09 02:07:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:02.934525 | orchestrator | 2026-04-09 02:07:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:05.980947 | orchestrator | 2026-04-09 02:07:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:05.982674 | orchestrator | 2026-04-09 02:07:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:05.982716 | orchestrator | 2026-04-09 02:07:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:09.040066 | orchestrator | 2026-04-09 02:07:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:09.041858 | orchestrator | 2026-04-09 02:07:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:09.041955 | orchestrator | 2026-04-09 02:07:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:12.090396 | orchestrator | 2026-04-09 02:07:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:12.091788 | orchestrator | 2026-04-09 02:07:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:12.092291 | orchestrator | 2026-04-09 02:07:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:15.143786 | orchestrator | 2026-04-09 
02:07:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:15.146328 | orchestrator | 2026-04-09 02:07:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:15.146517 | orchestrator | 2026-04-09 02:07:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:18.193333 | orchestrator | 2026-04-09 02:07:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:18.196110 | orchestrator | 2026-04-09 02:07:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:18.196190 | orchestrator | 2026-04-09 02:07:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:21.243377 | orchestrator | 2026-04-09 02:07:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:21.245688 | orchestrator | 2026-04-09 02:07:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:21.245750 | orchestrator | 2026-04-09 02:07:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:24.291430 | orchestrator | 2026-04-09 02:07:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:24.293313 | orchestrator | 2026-04-09 02:07:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:24.293384 | orchestrator | 2026-04-09 02:07:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:27.335291 | orchestrator | 2026-04-09 02:07:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:27.339007 | orchestrator | 2026-04-09 02:07:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:27.339096 | orchestrator | 2026-04-09 02:07:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:30.385721 | orchestrator | 2026-04-09 02:07:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:07:30.387634 | orchestrator | 2026-04-09 02:07:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:30.387689 | orchestrator | 2026-04-09 02:07:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:33.430123 | orchestrator | 2026-04-09 02:07:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:33.431233 | orchestrator | 2026-04-09 02:07:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:33.431312 | orchestrator | 2026-04-09 02:07:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:36.466589 | orchestrator | 2026-04-09 02:07:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:36.467582 | orchestrator | 2026-04-09 02:07:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:36.467615 | orchestrator | 2026-04-09 02:07:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:39.519251 | orchestrator | 2026-04-09 02:07:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:39.521457 | orchestrator | 2026-04-09 02:07:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:39.521630 | orchestrator | 2026-04-09 02:07:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:42.563738 | orchestrator | 2026-04-09 02:07:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:42.565766 | orchestrator | 2026-04-09 02:07:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:42.565854 | orchestrator | 2026-04-09 02:07:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:45.613773 | orchestrator | 2026-04-09 02:07:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:45.615357 | orchestrator | 2026-04-09 02:07:45 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:45.615411 | orchestrator | 2026-04-09 02:07:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:48.659633 | orchestrator | 2026-04-09 02:07:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:48.661656 | orchestrator | 2026-04-09 02:07:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:48.661716 | orchestrator | 2026-04-09 02:07:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:51.705267 | orchestrator | 2026-04-09 02:07:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:51.707768 | orchestrator | 2026-04-09 02:07:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:51.707861 | orchestrator | 2026-04-09 02:07:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:54.753752 | orchestrator | 2026-04-09 02:07:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:54.755175 | orchestrator | 2026-04-09 02:07:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:54.755228 | orchestrator | 2026-04-09 02:07:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:07:57.795668 | orchestrator | 2026-04-09 02:07:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:07:57.796694 | orchestrator | 2026-04-09 02:07:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:07:57.796728 | orchestrator | 2026-04-09 02:07:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:00.844038 | orchestrator | 2026-04-09 02:08:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:00.846328 | orchestrator | 2026-04-09 02:08:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:08:00.846399 | orchestrator | 2026-04-09 02:08:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:03.889632 | orchestrator | 2026-04-09 02:08:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:03.891854 | orchestrator | 2026-04-09 02:08:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:03.892530 | orchestrator | 2026-04-09 02:08:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:06.931447 | orchestrator | 2026-04-09 02:08:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:06.932846 | orchestrator | 2026-04-09 02:08:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:06.932998 | orchestrator | 2026-04-09 02:08:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:09.974272 | orchestrator | 2026-04-09 02:08:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:09.976281 | orchestrator | 2026-04-09 02:08:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:09.976337 | orchestrator | 2026-04-09 02:08:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:13.018249 | orchestrator | 2026-04-09 02:08:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:13.018548 | orchestrator | 2026-04-09 02:08:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:13.018563 | orchestrator | 2026-04-09 02:08:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:16.067043 | orchestrator | 2026-04-09 02:08:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:16.067244 | orchestrator | 2026-04-09 02:08:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:16.067267 | orchestrator | 2026-04-09 02:08:16 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:08:19.113635 | orchestrator | 2026-04-09 02:08:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:19.115984 | orchestrator | 2026-04-09 02:08:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:19.116057 | orchestrator | 2026-04-09 02:08:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:22.163571 | orchestrator | 2026-04-09 02:08:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:22.165265 | orchestrator | 2026-04-09 02:08:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:22.165343 | orchestrator | 2026-04-09 02:08:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:25.215431 | orchestrator | 2026-04-09 02:08:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:25.217575 | orchestrator | 2026-04-09 02:08:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:25.217658 | orchestrator | 2026-04-09 02:08:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:28.262341 | orchestrator | 2026-04-09 02:08:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:28.263971 | orchestrator | 2026-04-09 02:08:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:28.264074 | orchestrator | 2026-04-09 02:08:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:31.313529 | orchestrator | 2026-04-09 02:08:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:31.316340 | orchestrator | 2026-04-09 02:08:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:31.316450 | orchestrator | 2026-04-09 02:08:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:34.356065 | orchestrator | 2026-04-09 
02:08:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:34.357192 | orchestrator | 2026-04-09 02:08:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:34.357236 | orchestrator | 2026-04-09 02:08:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:37.406414 | orchestrator | 2026-04-09 02:08:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:37.409630 | orchestrator | 2026-04-09 02:08:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:37.409690 | orchestrator | 2026-04-09 02:08:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:40.468138 | orchestrator | 2026-04-09 02:08:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:40.468754 | orchestrator | 2026-04-09 02:08:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:40.468851 | orchestrator | 2026-04-09 02:08:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:43.516965 | orchestrator | 2026-04-09 02:08:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:43.520593 | orchestrator | 2026-04-09 02:08:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:43.521091 | orchestrator | 2026-04-09 02:08:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:46.574908 | orchestrator | 2026-04-09 02:08:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:46.577840 | orchestrator | 2026-04-09 02:08:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:46.577998 | orchestrator | 2026-04-09 02:08:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:49.624513 | orchestrator | 2026-04-09 02:08:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:08:49.627411 | orchestrator | 2026-04-09 02:08:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:49.627497 | orchestrator | 2026-04-09 02:08:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:52.672871 | orchestrator | 2026-04-09 02:08:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:52.675049 | orchestrator | 2026-04-09 02:08:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:52.675098 | orchestrator | 2026-04-09 02:08:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:55.719796 | orchestrator | 2026-04-09 02:08:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:55.720738 | orchestrator | 2026-04-09 02:08:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:55.720790 | orchestrator | 2026-04-09 02:08:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:08:58.755714 | orchestrator | 2026-04-09 02:08:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:08:58.757158 | orchestrator | 2026-04-09 02:08:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:08:58.757304 | orchestrator | 2026-04-09 02:08:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:01.804387 | orchestrator | 2026-04-09 02:09:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:01.807229 | orchestrator | 2026-04-09 02:09:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:01.807278 | orchestrator | 2026-04-09 02:09:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:04.854624 | orchestrator | 2026-04-09 02:09:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:04.855991 | orchestrator | 2026-04-09 02:09:04 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:04.856051 | orchestrator | 2026-04-09 02:09:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:07.905916 | orchestrator | 2026-04-09 02:09:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:07.907538 | orchestrator | 2026-04-09 02:09:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:07.907670 | orchestrator | 2026-04-09 02:09:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:10.951735 | orchestrator | 2026-04-09 02:09:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:10.953797 | orchestrator | 2026-04-09 02:09:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:10.953892 | orchestrator | 2026-04-09 02:09:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:13.997751 | orchestrator | 2026-04-09 02:09:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:13.999743 | orchestrator | 2026-04-09 02:09:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:13.999797 | orchestrator | 2026-04-09 02:09:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:17.048411 | orchestrator | 2026-04-09 02:09:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:17.049724 | orchestrator | 2026-04-09 02:09:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:17.049763 | orchestrator | 2026-04-09 02:09:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:20.094466 | orchestrator | 2026-04-09 02:09:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:20.097069 | orchestrator | 2026-04-09 02:09:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:09:20.097192 | orchestrator | 2026-04-09 02:09:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:23.136186 | orchestrator | 2026-04-09 02:09:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:23.137599 | orchestrator | 2026-04-09 02:09:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:23.137651 | orchestrator | 2026-04-09 02:09:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:26.180453 | orchestrator | 2026-04-09 02:09:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:26.181698 | orchestrator | 2026-04-09 02:09:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:26.181780 | orchestrator | 2026-04-09 02:09:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:29.223494 | orchestrator | 2026-04-09 02:09:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:29.225624 | orchestrator | 2026-04-09 02:09:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:29.225721 | orchestrator | 2026-04-09 02:09:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:32.266505 | orchestrator | 2026-04-09 02:09:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:32.268279 | orchestrator | 2026-04-09 02:09:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:32.268333 | orchestrator | 2026-04-09 02:09:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:35.313722 | orchestrator | 2026-04-09 02:09:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:35.317447 | orchestrator | 2026-04-09 02:09:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:35.317519 | orchestrator | 2026-04-09 02:09:35 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:09:38.353777 | orchestrator | 2026-04-09 02:09:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:38.354217 | orchestrator | 2026-04-09 02:09:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:38.354265 | orchestrator | 2026-04-09 02:09:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:41.390341 | orchestrator | 2026-04-09 02:09:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:41.392192 | orchestrator | 2026-04-09 02:09:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:41.392229 | orchestrator | 2026-04-09 02:09:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:44.434303 | orchestrator | 2026-04-09 02:09:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:44.436616 | orchestrator | 2026-04-09 02:09:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:44.436696 | orchestrator | 2026-04-09 02:09:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:47.480381 | orchestrator | 2026-04-09 02:09:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:47.481223 | orchestrator | 2026-04-09 02:09:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:47.481258 | orchestrator | 2026-04-09 02:09:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:50.524321 | orchestrator | 2026-04-09 02:09:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:50.525894 | orchestrator | 2026-04-09 02:09:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:50.525999 | orchestrator | 2026-04-09 02:09:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:53.565599 | orchestrator | 2026-04-09 
02:09:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:53.567219 | orchestrator | 2026-04-09 02:09:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:53.567265 | orchestrator | 2026-04-09 02:09:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:56.612200 | orchestrator | 2026-04-09 02:09:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:56.612451 | orchestrator | 2026-04-09 02:09:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:56.612481 | orchestrator | 2026-04-09 02:09:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:09:59.659193 | orchestrator | 2026-04-09 02:09:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:09:59.661052 | orchestrator | 2026-04-09 02:09:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:09:59.661106 | orchestrator | 2026-04-09 02:09:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:02.704819 | orchestrator | 2026-04-09 02:10:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:10:02.707667 | orchestrator | 2026-04-09 02:10:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:10:02.707833 | orchestrator | 2026-04-09 02:10:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:05.749853 | orchestrator | 2026-04-09 02:10:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:10:05.751383 | orchestrator | 2026-04-09 02:10:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:10:05.751436 | orchestrator | 2026-04-09 02:10:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:08.796357 | orchestrator | 2026-04-09 02:10:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:10:08.798212 | orchestrator | 2026-04-09 02:10:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:10:08.798291 | orchestrator | 2026-04-09 02:10:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:11.840107 | orchestrator | 2026-04-09 02:10:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:10:11.841853 | orchestrator | 2026-04-09 02:10:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:10:11.841893 | orchestrator | 2026-04-09 02:10:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:14.889374 | orchestrator | 2026-04-09 02:10:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:10:14.891612 | orchestrator | 2026-04-09 02:10:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:10:14.891662 | orchestrator | 2026-04-09 02:10:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:17.933736 | orchestrator | 2026-04-09 02:10:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:10:17.935391 | orchestrator | 2026-04-09 02:10:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:10:17.935435 | orchestrator | 2026-04-09 02:10:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:20.992446 | orchestrator | 2026-04-09 02:10:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:10:20.993785 | orchestrator | 2026-04-09 02:10:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:10:20.993912 | orchestrator | 2026-04-09 02:10:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:24.048173 | orchestrator | 2026-04-09 02:10:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:10:24.050209 | orchestrator | 2026-04-09 02:10:24 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:10:24.050259 | orchestrator | 2026-04-09 02:10:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:27.098762 | orchestrator | 2026-04-09 02:10:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:10:27.100902 | orchestrator | 2026-04-09 02:10:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:10:27.100952 | orchestrator | 2026-04-09 02:10:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:30.152355 | orchestrator | 2026-04-09 02:10:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:10:30.153866 | orchestrator | 2026-04-09 02:10:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:10:30.154130 | orchestrator | 2026-04-09 02:10:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:33.205317 | orchestrator | 2026-04-09 02:10:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:10:33.208622 | orchestrator | 2026-04-09 02:10:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:10:33.208689 | orchestrator | 2026-04-09 02:10:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:36.261501 | orchestrator | 2026-04-09 02:10:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:10:36.262639 | orchestrator | 2026-04-09 02:10:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:10:36.262705 | orchestrator | 2026-04-09 02:10:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:10:39.310605 | orchestrator | 2026-04-09 02:10:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:10:39.311517 | orchestrator | 2026-04-09 02:10:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:10:39.311576 | orchestrator | 2026-04-09 02:10:39 | INFO  | Wait 1 second(s) until the next check
2026-04-09 02:10:42.353885 | orchestrator | 2026-04-09 02:10:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 02:10:42.355351 | orchestrator | 2026-04-09 02:10:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 02:10:42.355395 | orchestrator | 2026-04-09 02:10:42 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: both tasks remained in state STARTED, checked every ~3 seconds, from 02:10:42 through 02:16:11 ...]
2026-04-09 02:16:11.637712 | orchestrator | 2026-04-09 02:16:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 02:16:11.639244 | orchestrator | 2026-04-09 02:16:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 02:16:11.639392 | orchestrator | 2026-04-09 02:16:11 | INFO  | Wait 1 second(s)
until the next check 2026-04-09 02:16:14.683273 | orchestrator | 2026-04-09 02:16:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:14.685459 | orchestrator | 2026-04-09 02:16:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:14.685537 | orchestrator | 2026-04-09 02:16:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:17.730463 | orchestrator | 2026-04-09 02:16:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:17.731574 | orchestrator | 2026-04-09 02:16:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:17.731606 | orchestrator | 2026-04-09 02:16:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:20.773052 | orchestrator | 2026-04-09 02:16:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:20.773120 | orchestrator | 2026-04-09 02:16:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:20.773126 | orchestrator | 2026-04-09 02:16:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:23.819127 | orchestrator | 2026-04-09 02:16:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:23.821259 | orchestrator | 2026-04-09 02:16:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:23.821302 | orchestrator | 2026-04-09 02:16:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:26.870731 | orchestrator | 2026-04-09 02:16:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:26.873078 | orchestrator | 2026-04-09 02:16:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:26.873235 | orchestrator | 2026-04-09 02:16:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:29.918172 | orchestrator | 2026-04-09 
02:16:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:29.921218 | orchestrator | 2026-04-09 02:16:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:29.921280 | orchestrator | 2026-04-09 02:16:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:32.966107 | orchestrator | 2026-04-09 02:16:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:32.967246 | orchestrator | 2026-04-09 02:16:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:32.967282 | orchestrator | 2026-04-09 02:16:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:36.017256 | orchestrator | 2026-04-09 02:16:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:36.019443 | orchestrator | 2026-04-09 02:16:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:36.019496 | orchestrator | 2026-04-09 02:16:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:39.068279 | orchestrator | 2026-04-09 02:16:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:39.069189 | orchestrator | 2026-04-09 02:16:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:39.069218 | orchestrator | 2026-04-09 02:16:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:42.114723 | orchestrator | 2026-04-09 02:16:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:42.117129 | orchestrator | 2026-04-09 02:16:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:42.117234 | orchestrator | 2026-04-09 02:16:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:45.163815 | orchestrator | 2026-04-09 02:16:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:16:45.164221 | orchestrator | 2026-04-09 02:16:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:45.164243 | orchestrator | 2026-04-09 02:16:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:48.203470 | orchestrator | 2026-04-09 02:16:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:48.204344 | orchestrator | 2026-04-09 02:16:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:48.204370 | orchestrator | 2026-04-09 02:16:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:51.246746 | orchestrator | 2026-04-09 02:16:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:51.248979 | orchestrator | 2026-04-09 02:16:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:51.249107 | orchestrator | 2026-04-09 02:16:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:54.294535 | orchestrator | 2026-04-09 02:16:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:54.295973 | orchestrator | 2026-04-09 02:16:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:54.296010 | orchestrator | 2026-04-09 02:16:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:16:57.344888 | orchestrator | 2026-04-09 02:16:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:16:57.347255 | orchestrator | 2026-04-09 02:16:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:16:57.347401 | orchestrator | 2026-04-09 02:16:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:00.399828 | orchestrator | 2026-04-09 02:17:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:00.400519 | orchestrator | 2026-04-09 02:17:00 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:00.400606 | orchestrator | 2026-04-09 02:17:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:03.442864 | orchestrator | 2026-04-09 02:17:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:03.445260 | orchestrator | 2026-04-09 02:17:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:03.445319 | orchestrator | 2026-04-09 02:17:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:06.486633 | orchestrator | 2026-04-09 02:17:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:06.488846 | orchestrator | 2026-04-09 02:17:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:06.488899 | orchestrator | 2026-04-09 02:17:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:09.532314 | orchestrator | 2026-04-09 02:17:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:09.532456 | orchestrator | 2026-04-09 02:17:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:09.532472 | orchestrator | 2026-04-09 02:17:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:12.578433 | orchestrator | 2026-04-09 02:17:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:12.579580 | orchestrator | 2026-04-09 02:17:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:12.579834 | orchestrator | 2026-04-09 02:17:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:15.624144 | orchestrator | 2026-04-09 02:17:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:15.625738 | orchestrator | 2026-04-09 02:17:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:17:15.625779 | orchestrator | 2026-04-09 02:17:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:18.665643 | orchestrator | 2026-04-09 02:17:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:18.667616 | orchestrator | 2026-04-09 02:17:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:18.667686 | orchestrator | 2026-04-09 02:17:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:21.713484 | orchestrator | 2026-04-09 02:17:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:21.714857 | orchestrator | 2026-04-09 02:17:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:21.714889 | orchestrator | 2026-04-09 02:17:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:24.762901 | orchestrator | 2026-04-09 02:17:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:24.765418 | orchestrator | 2026-04-09 02:17:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:24.765481 | orchestrator | 2026-04-09 02:17:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:27.811292 | orchestrator | 2026-04-09 02:17:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:27.814270 | orchestrator | 2026-04-09 02:17:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:27.814338 | orchestrator | 2026-04-09 02:17:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:30.860849 | orchestrator | 2026-04-09 02:17:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:30.863372 | orchestrator | 2026-04-09 02:17:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:30.863479 | orchestrator | 2026-04-09 02:17:30 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:17:33.910519 | orchestrator | 2026-04-09 02:17:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:33.912449 | orchestrator | 2026-04-09 02:17:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:33.912726 | orchestrator | 2026-04-09 02:17:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:36.958727 | orchestrator | 2026-04-09 02:17:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:36.960434 | orchestrator | 2026-04-09 02:17:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:36.960490 | orchestrator | 2026-04-09 02:17:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:40.007347 | orchestrator | 2026-04-09 02:17:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:40.009620 | orchestrator | 2026-04-09 02:17:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:40.009684 | orchestrator | 2026-04-09 02:17:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:43.055205 | orchestrator | 2026-04-09 02:17:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:43.057588 | orchestrator | 2026-04-09 02:17:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:43.057705 | orchestrator | 2026-04-09 02:17:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:46.103796 | orchestrator | 2026-04-09 02:17:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:46.105149 | orchestrator | 2026-04-09 02:17:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:46.105215 | orchestrator | 2026-04-09 02:17:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:49.154570 | orchestrator | 2026-04-09 
02:17:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:49.156473 | orchestrator | 2026-04-09 02:17:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:49.156525 | orchestrator | 2026-04-09 02:17:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:52.210764 | orchestrator | 2026-04-09 02:17:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:52.214586 | orchestrator | 2026-04-09 02:17:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:52.214661 | orchestrator | 2026-04-09 02:17:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:55.265905 | orchestrator | 2026-04-09 02:17:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:55.267479 | orchestrator | 2026-04-09 02:17:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:55.267534 | orchestrator | 2026-04-09 02:17:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:17:58.312391 | orchestrator | 2026-04-09 02:17:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:17:58.313615 | orchestrator | 2026-04-09 02:17:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:17:58.313649 | orchestrator | 2026-04-09 02:17:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:01.359513 | orchestrator | 2026-04-09 02:18:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:01.363008 | orchestrator | 2026-04-09 02:18:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:01.363088 | orchestrator | 2026-04-09 02:18:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:04.396127 | orchestrator | 2026-04-09 02:18:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:18:04.398010 | orchestrator | 2026-04-09 02:18:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:04.398125 | orchestrator | 2026-04-09 02:18:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:07.444590 | orchestrator | 2026-04-09 02:18:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:07.445856 | orchestrator | 2026-04-09 02:18:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:07.445881 | orchestrator | 2026-04-09 02:18:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:10.490484 | orchestrator | 2026-04-09 02:18:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:10.491758 | orchestrator | 2026-04-09 02:18:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:10.491819 | orchestrator | 2026-04-09 02:18:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:13.536358 | orchestrator | 2026-04-09 02:18:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:13.537842 | orchestrator | 2026-04-09 02:18:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:13.537910 | orchestrator | 2026-04-09 02:18:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:16.574272 | orchestrator | 2026-04-09 02:18:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:16.576744 | orchestrator | 2026-04-09 02:18:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:16.576846 | orchestrator | 2026-04-09 02:18:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:19.619318 | orchestrator | 2026-04-09 02:18:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:19.621859 | orchestrator | 2026-04-09 02:18:19 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:19.621921 | orchestrator | 2026-04-09 02:18:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:22.663323 | orchestrator | 2026-04-09 02:18:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:22.664848 | orchestrator | 2026-04-09 02:18:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:22.664891 | orchestrator | 2026-04-09 02:18:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:25.708870 | orchestrator | 2026-04-09 02:18:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:25.710299 | orchestrator | 2026-04-09 02:18:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:25.710357 | orchestrator | 2026-04-09 02:18:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:28.749674 | orchestrator | 2026-04-09 02:18:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:28.751206 | orchestrator | 2026-04-09 02:18:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:28.751280 | orchestrator | 2026-04-09 02:18:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:31.792503 | orchestrator | 2026-04-09 02:18:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:31.795736 | orchestrator | 2026-04-09 02:18:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:31.795836 | orchestrator | 2026-04-09 02:18:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:34.836132 | orchestrator | 2026-04-09 02:18:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:34.838374 | orchestrator | 2026-04-09 02:18:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:18:34.838435 | orchestrator | 2026-04-09 02:18:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:37.880282 | orchestrator | 2026-04-09 02:18:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:37.881995 | orchestrator | 2026-04-09 02:18:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:37.882156 | orchestrator | 2026-04-09 02:18:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:40.931689 | orchestrator | 2026-04-09 02:18:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:40.933940 | orchestrator | 2026-04-09 02:18:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:40.934079 | orchestrator | 2026-04-09 02:18:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:43.980413 | orchestrator | 2026-04-09 02:18:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:43.981657 | orchestrator | 2026-04-09 02:18:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:43.981765 | orchestrator | 2026-04-09 02:18:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:47.028319 | orchestrator | 2026-04-09 02:18:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:47.029937 | orchestrator | 2026-04-09 02:18:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:47.030000 | orchestrator | 2026-04-09 02:18:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:50.077193 | orchestrator | 2026-04-09 02:18:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:50.078486 | orchestrator | 2026-04-09 02:18:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:50.078863 | orchestrator | 2026-04-09 02:18:50 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:18:53.130011 | orchestrator | 2026-04-09 02:18:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:53.131295 | orchestrator | 2026-04-09 02:18:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:53.131331 | orchestrator | 2026-04-09 02:18:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:56.179108 | orchestrator | 2026-04-09 02:18:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:56.180827 | orchestrator | 2026-04-09 02:18:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:56.180917 | orchestrator | 2026-04-09 02:18:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:18:59.224673 | orchestrator | 2026-04-09 02:18:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:18:59.225494 | orchestrator | 2026-04-09 02:18:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:18:59.225550 | orchestrator | 2026-04-09 02:18:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:02.258471 | orchestrator | 2026-04-09 02:19:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:02.258858 | orchestrator | 2026-04-09 02:19:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:02.258939 | orchestrator | 2026-04-09 02:19:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:05.301231 | orchestrator | 2026-04-09 02:19:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:05.302153 | orchestrator | 2026-04-09 02:19:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:05.302200 | orchestrator | 2026-04-09 02:19:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:08.349455 | orchestrator | 2026-04-09 
02:19:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:08.351665 | orchestrator | 2026-04-09 02:19:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:08.351709 | orchestrator | 2026-04-09 02:19:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:11.399936 | orchestrator | 2026-04-09 02:19:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:11.609967 | orchestrator | 2026-04-09 02:19:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:11.610108 | orchestrator | 2026-04-09 02:19:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:14.447017 | orchestrator | 2026-04-09 02:19:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:14.447526 | orchestrator | 2026-04-09 02:19:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:14.447563 | orchestrator | 2026-04-09 02:19:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:17.493671 | orchestrator | 2026-04-09 02:19:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:17.495197 | orchestrator | 2026-04-09 02:19:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:17.495250 | orchestrator | 2026-04-09 02:19:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:20.531894 | orchestrator | 2026-04-09 02:19:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:20.532520 | orchestrator | 2026-04-09 02:19:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:20.532587 | orchestrator | 2026-04-09 02:19:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:23.567768 | orchestrator | 2026-04-09 02:19:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:19:23.568863 | orchestrator | 2026-04-09 02:19:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:23.568882 | orchestrator | 2026-04-09 02:19:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:26.613025 | orchestrator | 2026-04-09 02:19:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:26.614571 | orchestrator | 2026-04-09 02:19:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:26.614608 | orchestrator | 2026-04-09 02:19:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:29.656326 | orchestrator | 2026-04-09 02:19:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:29.657618 | orchestrator | 2026-04-09 02:19:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:29.657656 | orchestrator | 2026-04-09 02:19:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:32.697185 | orchestrator | 2026-04-09 02:19:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:32.698133 | orchestrator | 2026-04-09 02:19:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:32.698164 | orchestrator | 2026-04-09 02:19:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:35.742404 | orchestrator | 2026-04-09 02:19:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:35.743514 | orchestrator | 2026-04-09 02:19:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:35.743565 | orchestrator | 2026-04-09 02:19:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:38.789722 | orchestrator | 2026-04-09 02:19:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:38.792333 | orchestrator | 2026-04-09 02:19:38 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:38.792387 | orchestrator | 2026-04-09 02:19:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:41.838917 | orchestrator | 2026-04-09 02:19:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:41.839553 | orchestrator | 2026-04-09 02:19:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:41.839582 | orchestrator | 2026-04-09 02:19:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:44.888262 | orchestrator | 2026-04-09 02:19:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:44.890006 | orchestrator | 2026-04-09 02:19:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:44.890126 | orchestrator | 2026-04-09 02:19:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:47.941697 | orchestrator | 2026-04-09 02:19:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:47.943657 | orchestrator | 2026-04-09 02:19:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:47.943728 | orchestrator | 2026-04-09 02:19:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:50.989358 | orchestrator | 2026-04-09 02:19:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:50.990919 | orchestrator | 2026-04-09 02:19:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:19:50.990997 | orchestrator | 2026-04-09 02:19:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:19:54.032831 | orchestrator | 2026-04-09 02:19:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:19:54.035712 | orchestrator | 2026-04-09 02:19:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:19:54.035834 | orchestrator | 2026-04-09 02:19:54 | INFO  | Wait 1 second(s) until the next check
2026-04-09 02:19:57.077183 | orchestrator | 2026-04-09 02:19:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 02:19:57.079050 | orchestrator | 2026-04-09 02:19:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 02:19:57.079216 | orchestrator | 2026-04-09 02:19:57 | INFO  | Wait 1 second(s) until the next check
2026-04-09 02:24:52.580168 | orchestrator | 2026-04-09 02:24:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 02:24:52.581258 | orchestrator | 2026-04-09 02:24:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 02:24:52.581316 | orchestrator | 2026-04-09 02:24:52 | INFO  | Wait 1 second(s) until the next check
2026-04-09 02:24:55.621785 | orchestrator | 2026-04-09 02:24:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 02:24:55.623779 | orchestrator | 2026-04-09 02:24:55 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:24:55.623832 | orchestrator | 2026-04-09 02:24:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:24:58.661724 | orchestrator | 2026-04-09 02:24:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:24:58.662904 | orchestrator | 2026-04-09 02:24:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:24:58.662947 | orchestrator | 2026-04-09 02:24:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:01.709219 | orchestrator | 2026-04-09 02:25:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:01.709957 | orchestrator | 2026-04-09 02:25:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:01.709988 | orchestrator | 2026-04-09 02:25:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:04.757063 | orchestrator | 2026-04-09 02:25:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:04.759733 | orchestrator | 2026-04-09 02:25:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:04.759803 | orchestrator | 2026-04-09 02:25:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:07.802450 | orchestrator | 2026-04-09 02:25:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:07.803544 | orchestrator | 2026-04-09 02:25:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:07.803695 | orchestrator | 2026-04-09 02:25:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:10.855104 | orchestrator | 2026-04-09 02:25:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:10.857578 | orchestrator | 2026-04-09 02:25:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:25:10.857703 | orchestrator | 2026-04-09 02:25:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:13.901347 | orchestrator | 2026-04-09 02:25:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:13.902217 | orchestrator | 2026-04-09 02:25:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:13.902307 | orchestrator | 2026-04-09 02:25:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:16.952881 | orchestrator | 2026-04-09 02:25:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:16.954261 | orchestrator | 2026-04-09 02:25:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:16.954447 | orchestrator | 2026-04-09 02:25:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:19.993094 | orchestrator | 2026-04-09 02:25:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:19.993845 | orchestrator | 2026-04-09 02:25:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:19.993896 | orchestrator | 2026-04-09 02:25:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:23.046248 | orchestrator | 2026-04-09 02:25:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:23.047873 | orchestrator | 2026-04-09 02:25:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:23.047905 | orchestrator | 2026-04-09 02:25:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:26.095792 | orchestrator | 2026-04-09 02:25:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:26.097878 | orchestrator | 2026-04-09 02:25:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:26.097960 | orchestrator | 2026-04-09 02:25:26 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:25:29.150212 | orchestrator | 2026-04-09 02:25:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:29.153034 | orchestrator | 2026-04-09 02:25:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:29.153091 | orchestrator | 2026-04-09 02:25:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:32.206880 | orchestrator | 2026-04-09 02:25:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:32.208006 | orchestrator | 2026-04-09 02:25:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:32.208088 | orchestrator | 2026-04-09 02:25:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:35.256074 | orchestrator | 2026-04-09 02:25:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:35.258663 | orchestrator | 2026-04-09 02:25:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:35.258758 | orchestrator | 2026-04-09 02:25:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:38.303420 | orchestrator | 2026-04-09 02:25:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:38.306061 | orchestrator | 2026-04-09 02:25:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:38.306186 | orchestrator | 2026-04-09 02:25:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:41.340218 | orchestrator | 2026-04-09 02:25:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:41.341042 | orchestrator | 2026-04-09 02:25:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:41.341092 | orchestrator | 2026-04-09 02:25:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:44.390339 | orchestrator | 2026-04-09 
02:25:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:44.392149 | orchestrator | 2026-04-09 02:25:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:44.392676 | orchestrator | 2026-04-09 02:25:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:47.454126 | orchestrator | 2026-04-09 02:25:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:47.456857 | orchestrator | 2026-04-09 02:25:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:47.456899 | orchestrator | 2026-04-09 02:25:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:50.504651 | orchestrator | 2026-04-09 02:25:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:50.507219 | orchestrator | 2026-04-09 02:25:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:50.507278 | orchestrator | 2026-04-09 02:25:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:53.563463 | orchestrator | 2026-04-09 02:25:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:53.566410 | orchestrator | 2026-04-09 02:25:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:53.566568 | orchestrator | 2026-04-09 02:25:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:56.623906 | orchestrator | 2026-04-09 02:25:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:25:56.625887 | orchestrator | 2026-04-09 02:25:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:56.626099 | orchestrator | 2026-04-09 02:25:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:25:59.674449 | orchestrator | 2026-04-09 02:25:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:25:59.675899 | orchestrator | 2026-04-09 02:25:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:25:59.675942 | orchestrator | 2026-04-09 02:25:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:02.734115 | orchestrator | 2026-04-09 02:26:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:02.735073 | orchestrator | 2026-04-09 02:26:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:02.735146 | orchestrator | 2026-04-09 02:26:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:05.779534 | orchestrator | 2026-04-09 02:26:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:05.781233 | orchestrator | 2026-04-09 02:26:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:05.781309 | orchestrator | 2026-04-09 02:26:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:08.831429 | orchestrator | 2026-04-09 02:26:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:08.834971 | orchestrator | 2026-04-09 02:26:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:08.835087 | orchestrator | 2026-04-09 02:26:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:11.884663 | orchestrator | 2026-04-09 02:26:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:11.888015 | orchestrator | 2026-04-09 02:26:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:11.888075 | orchestrator | 2026-04-09 02:26:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:14.936135 | orchestrator | 2026-04-09 02:26:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:14.937261 | orchestrator | 2026-04-09 02:26:14 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:14.937306 | orchestrator | 2026-04-09 02:26:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:17.980715 | orchestrator | 2026-04-09 02:26:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:17.981901 | orchestrator | 2026-04-09 02:26:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:17.981949 | orchestrator | 2026-04-09 02:26:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:21.025770 | orchestrator | 2026-04-09 02:26:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:21.027173 | orchestrator | 2026-04-09 02:26:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:21.027368 | orchestrator | 2026-04-09 02:26:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:24.075636 | orchestrator | 2026-04-09 02:26:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:24.076149 | orchestrator | 2026-04-09 02:26:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:24.076169 | orchestrator | 2026-04-09 02:26:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:27.115028 | orchestrator | 2026-04-09 02:26:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:27.116026 | orchestrator | 2026-04-09 02:26:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:27.116143 | orchestrator | 2026-04-09 02:26:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:30.156946 | orchestrator | 2026-04-09 02:26:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:30.158850 | orchestrator | 2026-04-09 02:26:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:26:30.158925 | orchestrator | 2026-04-09 02:26:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:33.211658 | orchestrator | 2026-04-09 02:26:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:33.213818 | orchestrator | 2026-04-09 02:26:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:33.213917 | orchestrator | 2026-04-09 02:26:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:36.263831 | orchestrator | 2026-04-09 02:26:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:36.265488 | orchestrator | 2026-04-09 02:26:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:36.265528 | orchestrator | 2026-04-09 02:26:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:39.317054 | orchestrator | 2026-04-09 02:26:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:39.318922 | orchestrator | 2026-04-09 02:26:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:39.318996 | orchestrator | 2026-04-09 02:26:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:42.365478 | orchestrator | 2026-04-09 02:26:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:42.366869 | orchestrator | 2026-04-09 02:26:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:42.366908 | orchestrator | 2026-04-09 02:26:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:45.415447 | orchestrator | 2026-04-09 02:26:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:45.417366 | orchestrator | 2026-04-09 02:26:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:45.417409 | orchestrator | 2026-04-09 02:26:45 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:26:48.469118 | orchestrator | 2026-04-09 02:26:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:48.470722 | orchestrator | 2026-04-09 02:26:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:48.470787 | orchestrator | 2026-04-09 02:26:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:51.514272 | orchestrator | 2026-04-09 02:26:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:51.515796 | orchestrator | 2026-04-09 02:26:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:51.515831 | orchestrator | 2026-04-09 02:26:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:54.563662 | orchestrator | 2026-04-09 02:26:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:54.566257 | orchestrator | 2026-04-09 02:26:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:54.566333 | orchestrator | 2026-04-09 02:26:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:26:57.610760 | orchestrator | 2026-04-09 02:26:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:26:57.612685 | orchestrator | 2026-04-09 02:26:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:26:57.612744 | orchestrator | 2026-04-09 02:26:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:00.662989 | orchestrator | 2026-04-09 02:27:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:00.664599 | orchestrator | 2026-04-09 02:27:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:00.664678 | orchestrator | 2026-04-09 02:27:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:03.712246 | orchestrator | 2026-04-09 
02:27:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:03.714599 | orchestrator | 2026-04-09 02:27:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:03.714779 | orchestrator | 2026-04-09 02:27:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:06.760669 | orchestrator | 2026-04-09 02:27:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:06.762108 | orchestrator | 2026-04-09 02:27:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:06.762127 | orchestrator | 2026-04-09 02:27:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:09.814398 | orchestrator | 2026-04-09 02:27:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:09.816581 | orchestrator | 2026-04-09 02:27:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:09.816702 | orchestrator | 2026-04-09 02:27:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:12.869933 | orchestrator | 2026-04-09 02:27:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:12.871700 | orchestrator | 2026-04-09 02:27:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:12.871765 | orchestrator | 2026-04-09 02:27:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:15.917109 | orchestrator | 2026-04-09 02:27:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:15.918291 | orchestrator | 2026-04-09 02:27:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:15.918402 | orchestrator | 2026-04-09 02:27:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:18.964137 | orchestrator | 2026-04-09 02:27:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:27:18.966212 | orchestrator | 2026-04-09 02:27:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:18.966266 | orchestrator | 2026-04-09 02:27:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:22.012162 | orchestrator | 2026-04-09 02:27:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:22.014917 | orchestrator | 2026-04-09 02:27:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:22.015019 | orchestrator | 2026-04-09 02:27:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:25.063933 | orchestrator | 2026-04-09 02:27:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:25.065911 | orchestrator | 2026-04-09 02:27:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:25.066011 | orchestrator | 2026-04-09 02:27:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:28.118849 | orchestrator | 2026-04-09 02:27:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:28.119840 | orchestrator | 2026-04-09 02:27:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:28.119972 | orchestrator | 2026-04-09 02:27:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:31.175803 | orchestrator | 2026-04-09 02:27:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:31.178227 | orchestrator | 2026-04-09 02:27:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:31.178315 | orchestrator | 2026-04-09 02:27:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:34.226669 | orchestrator | 2026-04-09 02:27:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:34.227217 | orchestrator | 2026-04-09 02:27:34 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:34.227249 | orchestrator | 2026-04-09 02:27:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:37.277453 | orchestrator | 2026-04-09 02:27:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:37.280462 | orchestrator | 2026-04-09 02:27:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:37.280564 | orchestrator | 2026-04-09 02:27:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:40.331160 | orchestrator | 2026-04-09 02:27:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:40.333045 | orchestrator | 2026-04-09 02:27:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:40.333083 | orchestrator | 2026-04-09 02:27:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:43.383423 | orchestrator | 2026-04-09 02:27:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:43.385850 | orchestrator | 2026-04-09 02:27:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:43.385906 | orchestrator | 2026-04-09 02:27:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:46.433300 | orchestrator | 2026-04-09 02:27:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:46.434782 | orchestrator | 2026-04-09 02:27:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:46.434847 | orchestrator | 2026-04-09 02:27:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:49.489217 | orchestrator | 2026-04-09 02:27:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:49.490936 | orchestrator | 2026-04-09 02:27:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:27:49.491008 | orchestrator | 2026-04-09 02:27:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:52.537444 | orchestrator | 2026-04-09 02:27:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:52.539092 | orchestrator | 2026-04-09 02:27:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:52.539291 | orchestrator | 2026-04-09 02:27:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:55.593624 | orchestrator | 2026-04-09 02:27:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:55.596740 | orchestrator | 2026-04-09 02:27:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:55.596819 | orchestrator | 2026-04-09 02:27:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:27:58.643872 | orchestrator | 2026-04-09 02:27:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:27:58.645800 | orchestrator | 2026-04-09 02:27:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:27:58.645858 | orchestrator | 2026-04-09 02:27:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:28:01.692446 | orchestrator | 2026-04-09 02:28:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:28:01.693201 | orchestrator | 2026-04-09 02:28:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:28:01.693283 | orchestrator | 2026-04-09 02:28:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:28:04.745021 | orchestrator | 2026-04-09 02:28:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:28:04.746461 | orchestrator | 2026-04-09 02:28:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:28:04.746529 | orchestrator | 2026-04-09 02:28:04 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:28:07.793204 | orchestrator | 2026-04-09 02:28:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:28:07.795293 | orchestrator | 2026-04-09 02:28:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:28:07.795403 | orchestrator | 2026-04-09 02:28:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:28:10.843008 | orchestrator | 2026-04-09 02:28:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:28:10.845169 | orchestrator | 2026-04-09 02:28:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:28:10.845210 | orchestrator | 2026-04-09 02:28:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:28:13.891054 | orchestrator | 2026-04-09 02:28:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:28:13.893951 | orchestrator | 2026-04-09 02:28:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:28:13.894084 | orchestrator | 2026-04-09 02:28:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:28:16.942918 | orchestrator | 2026-04-09 02:28:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:28:16.944041 | orchestrator | 2026-04-09 02:28:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:28:16.944153 | orchestrator | 2026-04-09 02:28:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:28:19.982143 | orchestrator | 2026-04-09 02:28:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:28:19.983897 | orchestrator | 2026-04-09 02:28:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:28:19.983985 | orchestrator | 2026-04-09 02:28:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:28:23.015082 | orchestrator | 2026-04-09 
02:28:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:28:23.016101 | orchestrator | 2026-04-09 02:28:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:28:23.016158 | orchestrator | 2026-04-09 02:28:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:28:26.062161 | orchestrator | 2026-04-09 02:28:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:28:26.064418 | orchestrator | 2026-04-09 02:28:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:28:26.064587 | orchestrator | 2026-04-09 02:28:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:28:29.115405 | orchestrator | 2026-04-09 02:28:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:28:29.116582 | orchestrator | 2026-04-09 02:28:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:28:29.116628 | orchestrator | 2026-04-09 02:28:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:28:32.171980 | orchestrator | 2026-04-09 02:28:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:28:32.174476 | orchestrator | 2026-04-09 02:28:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:28:32.174580 | orchestrator | 2026-04-09 02:28:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:28:35.230655 | orchestrator | 2026-04-09 02:28:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:28:35.233087 | orchestrator | 2026-04-09 02:28:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:28:35.233185 | orchestrator | 2026-04-09 02:28:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:28:38.280096 | orchestrator | 2026-04-09 02:28:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:28:38.280559 | orchestrator | 2026-04-09 02:28:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 02:28:38.280598 | orchestrator | 2026-04-09 02:28:38 | INFO  | Wait 1 second(s) until the next check
2026-04-09 02:28:41.338575 | orchestrator | 2026-04-09 02:28:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 02:28:41.342149 | orchestrator | 2026-04-09 02:28:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 02:28:41.342220 | orchestrator | 2026-04-09 02:28:41 | INFO  | Wait 1 second(s) until the next check
[identical checks of tasks 6828e9fb-0b8a-4283-9fa1-3c6673200e24 and 4918e61a-8c4a-42f2-9f33-2d15624c1ede repeated every ~3 seconds from 02:28:44 through 02:33:31; both tasks remained in state STARTED throughout]
2026-04-09 02:33:34.275621 | orchestrator | 2026-04-09 02:33:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 02:35:34.381514 | orchestrator | 2026-04-09 02:35:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 02:35:34.381611 | orchestrator | 2026-04-09 02:35:34 | INFO  | Wait 1 second(s) until the next check
[identical checks repeated every ~3 seconds from 02:35:37 through 02:36:07; both tasks remained in state STARTED]
2026-04-09 02:36:10.971656 | orchestrator | 2026-04-09 02:36:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 02:36:10.974159 | orchestrator | 2026-04-09 02:36:10 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:10.974225 | orchestrator | 2026-04-09 02:36:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:14.027281 | orchestrator | 2026-04-09 02:36:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:14.031743 | orchestrator | 2026-04-09 02:36:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:14.032006 | orchestrator | 2026-04-09 02:36:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:17.072477 | orchestrator | 2026-04-09 02:36:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:17.074854 | orchestrator | 2026-04-09 02:36:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:17.074898 | orchestrator | 2026-04-09 02:36:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:20.114270 | orchestrator | 2026-04-09 02:36:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:20.116901 | orchestrator | 2026-04-09 02:36:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:20.116940 | orchestrator | 2026-04-09 02:36:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:23.166069 | orchestrator | 2026-04-09 02:36:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:23.168281 | orchestrator | 2026-04-09 02:36:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:23.168454 | orchestrator | 2026-04-09 02:36:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:26.217883 | orchestrator | 2026-04-09 02:36:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:26.219145 | orchestrator | 2026-04-09 02:36:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:36:26.219296 | orchestrator | 2026-04-09 02:36:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:29.258671 | orchestrator | 2026-04-09 02:36:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:29.263293 | orchestrator | 2026-04-09 02:36:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:29.263378 | orchestrator | 2026-04-09 02:36:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:32.305382 | orchestrator | 2026-04-09 02:36:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:32.306976 | orchestrator | 2026-04-09 02:36:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:32.307028 | orchestrator | 2026-04-09 02:36:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:35.352814 | orchestrator | 2026-04-09 02:36:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:35.355017 | orchestrator | 2026-04-09 02:36:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:35.355169 | orchestrator | 2026-04-09 02:36:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:38.400877 | orchestrator | 2026-04-09 02:36:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:38.403357 | orchestrator | 2026-04-09 02:36:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:38.403423 | orchestrator | 2026-04-09 02:36:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:41.448874 | orchestrator | 2026-04-09 02:36:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:41.451837 | orchestrator | 2026-04-09 02:36:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:41.451921 | orchestrator | 2026-04-09 02:36:41 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:36:44.500861 | orchestrator | 2026-04-09 02:36:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:44.502370 | orchestrator | 2026-04-09 02:36:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:44.502420 | orchestrator | 2026-04-09 02:36:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:47.549515 | orchestrator | 2026-04-09 02:36:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:47.553148 | orchestrator | 2026-04-09 02:36:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:47.553284 | orchestrator | 2026-04-09 02:36:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:50.595987 | orchestrator | 2026-04-09 02:36:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:50.598776 | orchestrator | 2026-04-09 02:36:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:50.598836 | orchestrator | 2026-04-09 02:36:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:53.641420 | orchestrator | 2026-04-09 02:36:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:53.644157 | orchestrator | 2026-04-09 02:36:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:53.644328 | orchestrator | 2026-04-09 02:36:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:56.690200 | orchestrator | 2026-04-09 02:36:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:56.691507 | orchestrator | 2026-04-09 02:36:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:56.691559 | orchestrator | 2026-04-09 02:36:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:36:59.738854 | orchestrator | 2026-04-09 
02:36:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:36:59.740716 | orchestrator | 2026-04-09 02:36:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:36:59.740770 | orchestrator | 2026-04-09 02:36:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:02.785895 | orchestrator | 2026-04-09 02:37:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:02.788087 | orchestrator | 2026-04-09 02:37:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:02.788287 | orchestrator | 2026-04-09 02:37:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:05.828190 | orchestrator | 2026-04-09 02:37:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:05.829826 | orchestrator | 2026-04-09 02:37:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:05.829897 | orchestrator | 2026-04-09 02:37:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:08.875908 | orchestrator | 2026-04-09 02:37:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:08.879045 | orchestrator | 2026-04-09 02:37:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:08.879109 | orchestrator | 2026-04-09 02:37:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:11.922731 | orchestrator | 2026-04-09 02:37:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:11.924678 | orchestrator | 2026-04-09 02:37:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:11.924796 | orchestrator | 2026-04-09 02:37:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:14.968856 | orchestrator | 2026-04-09 02:37:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:37:14.969758 | orchestrator | 2026-04-09 02:37:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:14.969809 | orchestrator | 2026-04-09 02:37:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:18.020644 | orchestrator | 2026-04-09 02:37:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:18.022367 | orchestrator | 2026-04-09 02:37:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:18.022428 | orchestrator | 2026-04-09 02:37:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:21.067910 | orchestrator | 2026-04-09 02:37:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:21.069670 | orchestrator | 2026-04-09 02:37:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:21.069710 | orchestrator | 2026-04-09 02:37:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:24.113059 | orchestrator | 2026-04-09 02:37:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:24.116165 | orchestrator | 2026-04-09 02:37:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:24.116253 | orchestrator | 2026-04-09 02:37:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:27.168380 | orchestrator | 2026-04-09 02:37:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:27.169907 | orchestrator | 2026-04-09 02:37:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:27.169956 | orchestrator | 2026-04-09 02:37:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:30.218470 | orchestrator | 2026-04-09 02:37:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:30.220023 | orchestrator | 2026-04-09 02:37:30 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:30.220072 | orchestrator | 2026-04-09 02:37:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:33.263331 | orchestrator | 2026-04-09 02:37:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:33.265134 | orchestrator | 2026-04-09 02:37:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:33.265176 | orchestrator | 2026-04-09 02:37:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:36.310709 | orchestrator | 2026-04-09 02:37:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:36.311958 | orchestrator | 2026-04-09 02:37:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:36.312007 | orchestrator | 2026-04-09 02:37:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:39.359653 | orchestrator | 2026-04-09 02:37:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:39.361485 | orchestrator | 2026-04-09 02:37:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:39.361546 | orchestrator | 2026-04-09 02:37:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:42.410816 | orchestrator | 2026-04-09 02:37:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:42.413420 | orchestrator | 2026-04-09 02:37:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:42.413496 | orchestrator | 2026-04-09 02:37:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:45.456326 | orchestrator | 2026-04-09 02:37:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:45.458238 | orchestrator | 2026-04-09 02:37:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:37:45.458284 | orchestrator | 2026-04-09 02:37:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:48.495238 | orchestrator | 2026-04-09 02:37:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:48.497260 | orchestrator | 2026-04-09 02:37:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:48.497303 | orchestrator | 2026-04-09 02:37:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:51.546702 | orchestrator | 2026-04-09 02:37:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:51.549372 | orchestrator | 2026-04-09 02:37:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:51.549431 | orchestrator | 2026-04-09 02:37:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:54.597113 | orchestrator | 2026-04-09 02:37:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:54.605377 | orchestrator | 2026-04-09 02:37:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:54.605531 | orchestrator | 2026-04-09 02:37:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:37:57.647091 | orchestrator | 2026-04-09 02:37:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:37:57.648797 | orchestrator | 2026-04-09 02:37:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:37:57.648953 | orchestrator | 2026-04-09 02:37:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:00.692637 | orchestrator | 2026-04-09 02:38:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:00.694231 | orchestrator | 2026-04-09 02:38:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:00.694298 | orchestrator | 2026-04-09 02:38:00 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:38:03.741853 | orchestrator | 2026-04-09 02:38:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:03.743905 | orchestrator | 2026-04-09 02:38:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:03.743982 | orchestrator | 2026-04-09 02:38:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:06.783124 | orchestrator | 2026-04-09 02:38:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:06.783953 | orchestrator | 2026-04-09 02:38:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:06.783980 | orchestrator | 2026-04-09 02:38:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:09.829642 | orchestrator | 2026-04-09 02:38:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:09.831669 | orchestrator | 2026-04-09 02:38:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:09.831712 | orchestrator | 2026-04-09 02:38:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:12.879066 | orchestrator | 2026-04-09 02:38:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:12.881431 | orchestrator | 2026-04-09 02:38:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:12.881595 | orchestrator | 2026-04-09 02:38:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:15.928586 | orchestrator | 2026-04-09 02:38:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:15.930641 | orchestrator | 2026-04-09 02:38:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:15.930706 | orchestrator | 2026-04-09 02:38:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:18.975683 | orchestrator | 2026-04-09 
02:38:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:18.976557 | orchestrator | 2026-04-09 02:38:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:18.976586 | orchestrator | 2026-04-09 02:38:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:22.018407 | orchestrator | 2026-04-09 02:38:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:22.019503 | orchestrator | 2026-04-09 02:38:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:22.019575 | orchestrator | 2026-04-09 02:38:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:25.064847 | orchestrator | 2026-04-09 02:38:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:25.066655 | orchestrator | 2026-04-09 02:38:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:25.066712 | orchestrator | 2026-04-09 02:38:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:28.114131 | orchestrator | 2026-04-09 02:38:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:28.115958 | orchestrator | 2026-04-09 02:38:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:28.116049 | orchestrator | 2026-04-09 02:38:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:31.161872 | orchestrator | 2026-04-09 02:38:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:31.164689 | orchestrator | 2026-04-09 02:38:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:31.164749 | orchestrator | 2026-04-09 02:38:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:34.210568 | orchestrator | 2026-04-09 02:38:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:38:34.214325 | orchestrator | 2026-04-09 02:38:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:34.214436 | orchestrator | 2026-04-09 02:38:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:37.254260 | orchestrator | 2026-04-09 02:38:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:37.256471 | orchestrator | 2026-04-09 02:38:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:37.256516 | orchestrator | 2026-04-09 02:38:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:40.298345 | orchestrator | 2026-04-09 02:38:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:40.301411 | orchestrator | 2026-04-09 02:38:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:40.301482 | orchestrator | 2026-04-09 02:38:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:43.346266 | orchestrator | 2026-04-09 02:38:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:43.348958 | orchestrator | 2026-04-09 02:38:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:43.349019 | orchestrator | 2026-04-09 02:38:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:46.396076 | orchestrator | 2026-04-09 02:38:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:46.397534 | orchestrator | 2026-04-09 02:38:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:46.397597 | orchestrator | 2026-04-09 02:38:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:49.445803 | orchestrator | 2026-04-09 02:38:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:49.447772 | orchestrator | 2026-04-09 02:38:49 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:49.448193 | orchestrator | 2026-04-09 02:38:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:52.493753 | orchestrator | 2026-04-09 02:38:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:52.496331 | orchestrator | 2026-04-09 02:38:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:52.496389 | orchestrator | 2026-04-09 02:38:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:55.548010 | orchestrator | 2026-04-09 02:38:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:55.548243 | orchestrator | 2026-04-09 02:38:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:55.548495 | orchestrator | 2026-04-09 02:38:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:38:58.591342 | orchestrator | 2026-04-09 02:38:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:38:58.592526 | orchestrator | 2026-04-09 02:38:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:38:58.592590 | orchestrator | 2026-04-09 02:38:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:01.643267 | orchestrator | 2026-04-09 02:39:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:01.644567 | orchestrator | 2026-04-09 02:39:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:01.644610 | orchestrator | 2026-04-09 02:39:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:04.689595 | orchestrator | 2026-04-09 02:39:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:04.691529 | orchestrator | 2026-04-09 02:39:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:39:04.691664 | orchestrator | 2026-04-09 02:39:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:07.740269 | orchestrator | 2026-04-09 02:39:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:07.741507 | orchestrator | 2026-04-09 02:39:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:07.741572 | orchestrator | 2026-04-09 02:39:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:10.787277 | orchestrator | 2026-04-09 02:39:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:10.789415 | orchestrator | 2026-04-09 02:39:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:10.789519 | orchestrator | 2026-04-09 02:39:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:13.839709 | orchestrator | 2026-04-09 02:39:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:13.843347 | orchestrator | 2026-04-09 02:39:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:13.843428 | orchestrator | 2026-04-09 02:39:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:16.889658 | orchestrator | 2026-04-09 02:39:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:16.892090 | orchestrator | 2026-04-09 02:39:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:16.892204 | orchestrator | 2026-04-09 02:39:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:19.940885 | orchestrator | 2026-04-09 02:39:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:19.943494 | orchestrator | 2026-04-09 02:39:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:19.943585 | orchestrator | 2026-04-09 02:39:19 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:39:22.993552 | orchestrator | 2026-04-09 02:39:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:22.995749 | orchestrator | 2026-04-09 02:39:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:22.995886 | orchestrator | 2026-04-09 02:39:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:26.043812 | orchestrator | 2026-04-09 02:39:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:26.045453 | orchestrator | 2026-04-09 02:39:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:26.045525 | orchestrator | 2026-04-09 02:39:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:29.097210 | orchestrator | 2026-04-09 02:39:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:29.098818 | orchestrator | 2026-04-09 02:39:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:29.098871 | orchestrator | 2026-04-09 02:39:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:32.145668 | orchestrator | 2026-04-09 02:39:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:32.147223 | orchestrator | 2026-04-09 02:39:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:32.147288 | orchestrator | 2026-04-09 02:39:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:35.187566 | orchestrator | 2026-04-09 02:39:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:35.189107 | orchestrator | 2026-04-09 02:39:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:35.189254 | orchestrator | 2026-04-09 02:39:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:38.235809 | orchestrator | 2026-04-09 
02:39:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:38.237447 | orchestrator | 2026-04-09 02:39:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:38.237487 | orchestrator | 2026-04-09 02:39:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:41.276921 | orchestrator | 2026-04-09 02:39:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:41.279237 | orchestrator | 2026-04-09 02:39:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:41.279392 | orchestrator | 2026-04-09 02:39:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:44.328753 | orchestrator | 2026-04-09 02:39:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:44.330491 | orchestrator | 2026-04-09 02:39:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:44.330635 | orchestrator | 2026-04-09 02:39:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:47.379741 | orchestrator | 2026-04-09 02:39:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:47.380866 | orchestrator | 2026-04-09 02:39:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:47.380983 | orchestrator | 2026-04-09 02:39:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:50.426824 | orchestrator | 2026-04-09 02:39:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:39:50.429416 | orchestrator | 2026-04-09 02:39:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:50.429472 | orchestrator | 2026-04-09 02:39:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:39:53.479854 | orchestrator | 2026-04-09 02:39:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:39:53.482091 | orchestrator | 2026-04-09 02:39:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:39:53.482194 | orchestrator | 2026-04-09 02:39:53 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds: tasks 6828e9fb-0b8a-4283-9fa1-3c6673200e24 and 4918e61a-8c4a-42f2-9f33-2d15624c1ede remained in state STARTED from 02:39:56 through 02:45:07, each check followed by "Wait 1 second(s) until the next check" ...]
2026-04-09 02:45:10.678227 | orchestrator | 2026-04-09 02:45:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:45:10.679863 | orchestrator | 2026-04-09 02:45:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:10.680009 | orchestrator | 2026-04-09 02:45:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:13.733742 | orchestrator | 2026-04-09 02:45:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:13.734732 | orchestrator | 2026-04-09 02:45:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:13.734774 | orchestrator | 2026-04-09 02:45:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:16.782369 | orchestrator | 2026-04-09 02:45:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:16.783095 | orchestrator | 2026-04-09 02:45:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:16.783165 | orchestrator | 2026-04-09 02:45:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:19.826923 | orchestrator | 2026-04-09 02:45:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:19.828987 | orchestrator | 2026-04-09 02:45:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:19.829102 | orchestrator | 2026-04-09 02:45:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:22.879298 | orchestrator | 2026-04-09 02:45:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:22.882258 | orchestrator | 2026-04-09 02:45:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:22.882455 | orchestrator | 2026-04-09 02:45:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:25.932917 | orchestrator | 2026-04-09 02:45:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:25.936127 | orchestrator | 2026-04-09 02:45:25 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:25.936239 | orchestrator | 2026-04-09 02:45:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:28.978993 | orchestrator | 2026-04-09 02:45:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:28.980884 | orchestrator | 2026-04-09 02:45:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:28.980942 | orchestrator | 2026-04-09 02:45:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:32.028865 | orchestrator | 2026-04-09 02:45:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:32.031941 | orchestrator | 2026-04-09 02:45:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:32.032004 | orchestrator | 2026-04-09 02:45:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:35.086873 | orchestrator | 2026-04-09 02:45:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:35.088123 | orchestrator | 2026-04-09 02:45:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:35.088184 | orchestrator | 2026-04-09 02:45:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:38.136315 | orchestrator | 2026-04-09 02:45:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:38.138535 | orchestrator | 2026-04-09 02:45:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:38.138617 | orchestrator | 2026-04-09 02:45:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:41.192414 | orchestrator | 2026-04-09 02:45:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:41.194122 | orchestrator | 2026-04-09 02:45:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:45:41.194192 | orchestrator | 2026-04-09 02:45:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:44.239380 | orchestrator | 2026-04-09 02:45:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:44.240401 | orchestrator | 2026-04-09 02:45:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:44.240427 | orchestrator | 2026-04-09 02:45:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:47.287021 | orchestrator | 2026-04-09 02:45:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:47.289227 | orchestrator | 2026-04-09 02:45:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:47.289329 | orchestrator | 2026-04-09 02:45:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:50.344773 | orchestrator | 2026-04-09 02:45:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:50.346448 | orchestrator | 2026-04-09 02:45:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:50.346532 | orchestrator | 2026-04-09 02:45:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:53.396756 | orchestrator | 2026-04-09 02:45:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:53.397923 | orchestrator | 2026-04-09 02:45:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:53.398012 | orchestrator | 2026-04-09 02:45:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:45:56.444435 | orchestrator | 2026-04-09 02:45:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:56.445913 | orchestrator | 2026-04-09 02:45:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:56.445943 | orchestrator | 2026-04-09 02:45:56 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:45:59.496238 | orchestrator | 2026-04-09 02:45:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:45:59.498813 | orchestrator | 2026-04-09 02:45:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:45:59.498877 | orchestrator | 2026-04-09 02:45:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:02.549369 | orchestrator | 2026-04-09 02:46:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:02.551057 | orchestrator | 2026-04-09 02:46:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:02.551109 | orchestrator | 2026-04-09 02:46:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:05.605590 | orchestrator | 2026-04-09 02:46:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:05.607273 | orchestrator | 2026-04-09 02:46:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:05.607378 | orchestrator | 2026-04-09 02:46:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:08.658900 | orchestrator | 2026-04-09 02:46:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:08.661213 | orchestrator | 2026-04-09 02:46:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:08.661357 | orchestrator | 2026-04-09 02:46:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:11.706868 | orchestrator | 2026-04-09 02:46:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:11.707943 | orchestrator | 2026-04-09 02:46:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:11.708007 | orchestrator | 2026-04-09 02:46:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:14.758274 | orchestrator | 2026-04-09 
02:46:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:14.761091 | orchestrator | 2026-04-09 02:46:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:14.761149 | orchestrator | 2026-04-09 02:46:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:17.810241 | orchestrator | 2026-04-09 02:46:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:17.812745 | orchestrator | 2026-04-09 02:46:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:17.812921 | orchestrator | 2026-04-09 02:46:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:20.868147 | orchestrator | 2026-04-09 02:46:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:20.870211 | orchestrator | 2026-04-09 02:46:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:20.870258 | orchestrator | 2026-04-09 02:46:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:23.917205 | orchestrator | 2026-04-09 02:46:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:23.918694 | orchestrator | 2026-04-09 02:46:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:23.918731 | orchestrator | 2026-04-09 02:46:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:26.971093 | orchestrator | 2026-04-09 02:46:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:26.973998 | orchestrator | 2026-04-09 02:46:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:26.974180 | orchestrator | 2026-04-09 02:46:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:30.015008 | orchestrator | 2026-04-09 02:46:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:46:30.016594 | orchestrator | 2026-04-09 02:46:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:30.016662 | orchestrator | 2026-04-09 02:46:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:33.060007 | orchestrator | 2026-04-09 02:46:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:33.061840 | orchestrator | 2026-04-09 02:46:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:33.061872 | orchestrator | 2026-04-09 02:46:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:36.103833 | orchestrator | 2026-04-09 02:46:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:36.105319 | orchestrator | 2026-04-09 02:46:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:36.105477 | orchestrator | 2026-04-09 02:46:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:39.152292 | orchestrator | 2026-04-09 02:46:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:39.153604 | orchestrator | 2026-04-09 02:46:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:39.153640 | orchestrator | 2026-04-09 02:46:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:42.210496 | orchestrator | 2026-04-09 02:46:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:42.213238 | orchestrator | 2026-04-09 02:46:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:42.213312 | orchestrator | 2026-04-09 02:46:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:45.267294 | orchestrator | 2026-04-09 02:46:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:45.269286 | orchestrator | 2026-04-09 02:46:45 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:45.269330 | orchestrator | 2026-04-09 02:46:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:48.320114 | orchestrator | 2026-04-09 02:46:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:48.320793 | orchestrator | 2026-04-09 02:46:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:48.320819 | orchestrator | 2026-04-09 02:46:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:51.367792 | orchestrator | 2026-04-09 02:46:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:51.369708 | orchestrator | 2026-04-09 02:46:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:51.369750 | orchestrator | 2026-04-09 02:46:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:54.426241 | orchestrator | 2026-04-09 02:46:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:54.427629 | orchestrator | 2026-04-09 02:46:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:54.427661 | orchestrator | 2026-04-09 02:46:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:46:57.488398 | orchestrator | 2026-04-09 02:46:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:46:57.490259 | orchestrator | 2026-04-09 02:46:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:46:57.490293 | orchestrator | 2026-04-09 02:46:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:00.535041 | orchestrator | 2026-04-09 02:47:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:00.537841 | orchestrator | 2026-04-09 02:47:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:47:00.537897 | orchestrator | 2026-04-09 02:47:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:03.585322 | orchestrator | 2026-04-09 02:47:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:03.588195 | orchestrator | 2026-04-09 02:47:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:03.588479 | orchestrator | 2026-04-09 02:47:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:06.630450 | orchestrator | 2026-04-09 02:47:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:06.632129 | orchestrator | 2026-04-09 02:47:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:06.632212 | orchestrator | 2026-04-09 02:47:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:09.690324 | orchestrator | 2026-04-09 02:47:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:09.693185 | orchestrator | 2026-04-09 02:47:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:09.693267 | orchestrator | 2026-04-09 02:47:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:12.746672 | orchestrator | 2026-04-09 02:47:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:12.749069 | orchestrator | 2026-04-09 02:47:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:12.749136 | orchestrator | 2026-04-09 02:47:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:15.797921 | orchestrator | 2026-04-09 02:47:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:15.800182 | orchestrator | 2026-04-09 02:47:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:15.800258 | orchestrator | 2026-04-09 02:47:15 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:47:18.846896 | orchestrator | 2026-04-09 02:47:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:18.849877 | orchestrator | 2026-04-09 02:47:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:18.849917 | orchestrator | 2026-04-09 02:47:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:21.903196 | orchestrator | 2026-04-09 02:47:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:21.906226 | orchestrator | 2026-04-09 02:47:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:21.906293 | orchestrator | 2026-04-09 02:47:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:24.969416 | orchestrator | 2026-04-09 02:47:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:24.971148 | orchestrator | 2026-04-09 02:47:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:24.971208 | orchestrator | 2026-04-09 02:47:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:28.022165 | orchestrator | 2026-04-09 02:47:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:28.025102 | orchestrator | 2026-04-09 02:47:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:28.025199 | orchestrator | 2026-04-09 02:47:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:31.073848 | orchestrator | 2026-04-09 02:47:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:31.074440 | orchestrator | 2026-04-09 02:47:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:31.074749 | orchestrator | 2026-04-09 02:47:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:34.123768 | orchestrator | 2026-04-09 
02:47:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:34.126422 | orchestrator | 2026-04-09 02:47:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:34.126568 | orchestrator | 2026-04-09 02:47:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:37.180495 | orchestrator | 2026-04-09 02:47:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:37.181521 | orchestrator | 2026-04-09 02:47:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:37.181689 | orchestrator | 2026-04-09 02:47:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:40.233824 | orchestrator | 2026-04-09 02:47:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:40.235659 | orchestrator | 2026-04-09 02:47:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:40.235709 | orchestrator | 2026-04-09 02:47:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:43.287764 | orchestrator | 2026-04-09 02:47:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:43.291924 | orchestrator | 2026-04-09 02:47:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:43.291989 | orchestrator | 2026-04-09 02:47:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:46.349591 | orchestrator | 2026-04-09 02:47:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:46.351715 | orchestrator | 2026-04-09 02:47:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:46.351742 | orchestrator | 2026-04-09 02:47:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:49.407463 | orchestrator | 2026-04-09 02:47:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:47:49.411124 | orchestrator | 2026-04-09 02:47:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:49.411219 | orchestrator | 2026-04-09 02:47:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:52.466485 | orchestrator | 2026-04-09 02:47:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:52.470280 | orchestrator | 2026-04-09 02:47:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:52.470731 | orchestrator | 2026-04-09 02:47:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:55.523279 | orchestrator | 2026-04-09 02:47:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:55.525961 | orchestrator | 2026-04-09 02:47:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:55.526091 | orchestrator | 2026-04-09 02:47:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:47:58.573988 | orchestrator | 2026-04-09 02:47:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:47:58.576805 | orchestrator | 2026-04-09 02:47:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:47:58.577074 | orchestrator | 2026-04-09 02:47:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:48:01.627480 | orchestrator | 2026-04-09 02:48:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:01.628374 | orchestrator | 2026-04-09 02:48:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:48:01.628416 | orchestrator | 2026-04-09 02:48:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:48:04.671139 | orchestrator | 2026-04-09 02:48:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:04.672691 | orchestrator | 2026-04-09 02:48:04 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:48:04.672881 | orchestrator | 2026-04-09 02:48:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:48:07.724701 | orchestrator | 2026-04-09 02:48:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:07.725749 | orchestrator | 2026-04-09 02:48:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:48:07.725799 | orchestrator | 2026-04-09 02:48:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:48:10.778870 | orchestrator | 2026-04-09 02:48:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:10.780978 | orchestrator | 2026-04-09 02:48:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:48:10.781053 | orchestrator | 2026-04-09 02:48:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:48:13.828866 | orchestrator | 2026-04-09 02:48:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:13.831125 | orchestrator | 2026-04-09 02:48:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:48:13.831202 | orchestrator | 2026-04-09 02:48:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:48:16.877866 | orchestrator | 2026-04-09 02:48:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:16.880686 | orchestrator | 2026-04-09 02:48:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:48:16.880756 | orchestrator | 2026-04-09 02:48:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:48:19.925410 | orchestrator | 2026-04-09 02:48:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:19.926990 | orchestrator | 2026-04-09 02:48:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:48:19.927117 | orchestrator | 2026-04-09 02:48:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:48:22.981180 | orchestrator | 2026-04-09 02:48:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:22.984077 | orchestrator | 2026-04-09 02:48:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:48:22.984150 | orchestrator | 2026-04-09 02:48:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:48:26.036929 | orchestrator | 2026-04-09 02:48:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:26.039171 | orchestrator | 2026-04-09 02:48:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:48:26.039262 | orchestrator | 2026-04-09 02:48:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:48:29.093505 | orchestrator | 2026-04-09 02:48:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:29.094191 | orchestrator | 2026-04-09 02:48:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:48:29.094857 | orchestrator | 2026-04-09 02:48:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:48:32.140132 | orchestrator | 2026-04-09 02:48:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:32.142323 | orchestrator | 2026-04-09 02:48:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:48:32.142510 | orchestrator | 2026-04-09 02:48:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:48:35.194662 | orchestrator | 2026-04-09 02:48:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:35.197115 | orchestrator | 2026-04-09 02:48:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:48:35.197185 | orchestrator | 2026-04-09 02:48:35 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:48:38.248611 | orchestrator | 2026-04-09 02:48:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:48:38.250973 | orchestrator | 2026-04-09 02:48:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:48:38.251094 | orchestrator | 2026-04-09 02:48:38 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: the same two STARTED/wait messages repeat every ~3 seconds from 02:48:41 through 02:53:49; both tasks remain in state STARTED throughout ...]
2026-04-09 02:53:52.561923 | orchestrator | 2026-04-09 02:53:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:53:52.563613 | orchestrator | 2026-04-09 02:53:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:53:52.563917 | orchestrator | 2026-04-09 02:53:52 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:53:55.618093 | orchestrator | 2026-04-09 02:53:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:53:55.620970 | orchestrator | 2026-04-09 02:53:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:53:55.621041 | orchestrator | 2026-04-09 02:53:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:53:58.670844 | orchestrator | 2026-04-09 02:53:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:53:58.672436 | orchestrator | 2026-04-09 02:53:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:53:58.672474 | orchestrator | 2026-04-09 02:53:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:01.728233 | orchestrator | 2026-04-09 02:54:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:01.732336 | orchestrator | 2026-04-09 02:54:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:01.732422 | orchestrator | 2026-04-09 02:54:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:04.782764 | orchestrator | 2026-04-09 02:54:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:04.783873 | orchestrator | 2026-04-09 02:54:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:04.783925 | orchestrator | 2026-04-09 02:54:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:07.831366 | orchestrator | 2026-04-09 02:54:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:07.832310 | orchestrator | 2026-04-09 02:54:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:07.832341 | orchestrator | 2026-04-09 02:54:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:10.881547 | orchestrator | 2026-04-09 
02:54:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:10.883696 | orchestrator | 2026-04-09 02:54:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:10.883751 | orchestrator | 2026-04-09 02:54:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:13.930344 | orchestrator | 2026-04-09 02:54:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:13.930462 | orchestrator | 2026-04-09 02:54:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:13.930489 | orchestrator | 2026-04-09 02:54:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:16.974439 | orchestrator | 2026-04-09 02:54:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:16.975064 | orchestrator | 2026-04-09 02:54:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:16.975156 | orchestrator | 2026-04-09 02:54:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:20.018864 | orchestrator | 2026-04-09 02:54:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:20.020841 | orchestrator | 2026-04-09 02:54:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:20.020892 | orchestrator | 2026-04-09 02:54:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:23.066889 | orchestrator | 2026-04-09 02:54:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:23.068049 | orchestrator | 2026-04-09 02:54:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:23.068071 | orchestrator | 2026-04-09 02:54:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:26.119563 | orchestrator | 2026-04-09 02:54:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:54:26.121072 | orchestrator | 2026-04-09 02:54:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:26.121511 | orchestrator | 2026-04-09 02:54:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:29.176167 | orchestrator | 2026-04-09 02:54:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:29.178600 | orchestrator | 2026-04-09 02:54:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:29.178688 | orchestrator | 2026-04-09 02:54:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:32.227890 | orchestrator | 2026-04-09 02:54:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:32.230323 | orchestrator | 2026-04-09 02:54:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:32.230959 | orchestrator | 2026-04-09 02:54:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:35.277724 | orchestrator | 2026-04-09 02:54:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:35.280015 | orchestrator | 2026-04-09 02:54:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:35.280064 | orchestrator | 2026-04-09 02:54:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:38.330247 | orchestrator | 2026-04-09 02:54:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:38.331122 | orchestrator | 2026-04-09 02:54:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:38.331200 | orchestrator | 2026-04-09 02:54:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:41.372379 | orchestrator | 2026-04-09 02:54:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:41.373898 | orchestrator | 2026-04-09 02:54:41 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:41.373937 | orchestrator | 2026-04-09 02:54:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:44.411528 | orchestrator | 2026-04-09 02:54:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:44.412861 | orchestrator | 2026-04-09 02:54:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:44.412883 | orchestrator | 2026-04-09 02:54:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:47.454784 | orchestrator | 2026-04-09 02:54:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:47.455583 | orchestrator | 2026-04-09 02:54:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:47.455615 | orchestrator | 2026-04-09 02:54:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:50.493827 | orchestrator | 2026-04-09 02:54:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:50.495734 | orchestrator | 2026-04-09 02:54:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:50.495785 | orchestrator | 2026-04-09 02:54:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:53.526620 | orchestrator | 2026-04-09 02:54:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:53.529357 | orchestrator | 2026-04-09 02:54:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:53.529387 | orchestrator | 2026-04-09 02:54:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:56.569511 | orchestrator | 2026-04-09 02:54:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:56.573570 | orchestrator | 2026-04-09 02:54:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:54:56.574772 | orchestrator | 2026-04-09 02:54:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:54:59.629948 | orchestrator | 2026-04-09 02:54:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:54:59.632215 | orchestrator | 2026-04-09 02:54:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:54:59.632338 | orchestrator | 2026-04-09 02:54:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:02.679831 | orchestrator | 2026-04-09 02:55:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:02.681632 | orchestrator | 2026-04-09 02:55:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:02.681706 | orchestrator | 2026-04-09 02:55:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:05.733468 | orchestrator | 2026-04-09 02:55:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:05.734567 | orchestrator | 2026-04-09 02:55:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:05.734677 | orchestrator | 2026-04-09 02:55:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:08.789022 | orchestrator | 2026-04-09 02:55:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:08.791867 | orchestrator | 2026-04-09 02:55:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:08.792038 | orchestrator | 2026-04-09 02:55:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:11.838567 | orchestrator | 2026-04-09 02:55:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:11.839607 | orchestrator | 2026-04-09 02:55:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:11.839631 | orchestrator | 2026-04-09 02:55:11 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:55:14.876296 | orchestrator | 2026-04-09 02:55:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:14.877567 | orchestrator | 2026-04-09 02:55:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:14.877618 | orchestrator | 2026-04-09 02:55:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:17.920401 | orchestrator | 2026-04-09 02:55:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:17.921021 | orchestrator | 2026-04-09 02:55:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:17.921066 | orchestrator | 2026-04-09 02:55:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:20.960032 | orchestrator | 2026-04-09 02:55:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:20.961874 | orchestrator | 2026-04-09 02:55:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:20.961937 | orchestrator | 2026-04-09 02:55:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:24.012917 | orchestrator | 2026-04-09 02:55:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:24.014453 | orchestrator | 2026-04-09 02:55:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:24.014508 | orchestrator | 2026-04-09 02:55:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:27.056659 | orchestrator | 2026-04-09 02:55:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:27.058607 | orchestrator | 2026-04-09 02:55:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:27.058798 | orchestrator | 2026-04-09 02:55:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:30.103089 | orchestrator | 2026-04-09 
02:55:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:30.103943 | orchestrator | 2026-04-09 02:55:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:30.103967 | orchestrator | 2026-04-09 02:55:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:33.151538 | orchestrator | 2026-04-09 02:55:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:33.154842 | orchestrator | 2026-04-09 02:55:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:33.155021 | orchestrator | 2026-04-09 02:55:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:36.213523 | orchestrator | 2026-04-09 02:55:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:36.216018 | orchestrator | 2026-04-09 02:55:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:36.216221 | orchestrator | 2026-04-09 02:55:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:39.262661 | orchestrator | 2026-04-09 02:55:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:39.264329 | orchestrator | 2026-04-09 02:55:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:39.264464 | orchestrator | 2026-04-09 02:55:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:42.306178 | orchestrator | 2026-04-09 02:55:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:42.307711 | orchestrator | 2026-04-09 02:55:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:42.307795 | orchestrator | 2026-04-09 02:55:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:45.357717 | orchestrator | 2026-04-09 02:55:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:55:45.359776 | orchestrator | 2026-04-09 02:55:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:45.359816 | orchestrator | 2026-04-09 02:55:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:48.412506 | orchestrator | 2026-04-09 02:55:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:48.414355 | orchestrator | 2026-04-09 02:55:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:48.414419 | orchestrator | 2026-04-09 02:55:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:51.456641 | orchestrator | 2026-04-09 02:55:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:51.459720 | orchestrator | 2026-04-09 02:55:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:51.459783 | orchestrator | 2026-04-09 02:55:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:54.506511 | orchestrator | 2026-04-09 02:55:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:54.508537 | orchestrator | 2026-04-09 02:55:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:54.508589 | orchestrator | 2026-04-09 02:55:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:55:57.557003 | orchestrator | 2026-04-09 02:55:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:55:57.557875 | orchestrator | 2026-04-09 02:55:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:55:57.557961 | orchestrator | 2026-04-09 02:55:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:00.614232 | orchestrator | 2026-04-09 02:56:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:00.618782 | orchestrator | 2026-04-09 02:56:00 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:00.618870 | orchestrator | 2026-04-09 02:56:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:03.656768 | orchestrator | 2026-04-09 02:56:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:03.658615 | orchestrator | 2026-04-09 02:56:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:03.658677 | orchestrator | 2026-04-09 02:56:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:06.701011 | orchestrator | 2026-04-09 02:56:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:06.702582 | orchestrator | 2026-04-09 02:56:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:06.702663 | orchestrator | 2026-04-09 02:56:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:09.746314 | orchestrator | 2026-04-09 02:56:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:09.748199 | orchestrator | 2026-04-09 02:56:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:09.748372 | orchestrator | 2026-04-09 02:56:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:12.792460 | orchestrator | 2026-04-09 02:56:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:12.794101 | orchestrator | 2026-04-09 02:56:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:12.794212 | orchestrator | 2026-04-09 02:56:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:15.841927 | orchestrator | 2026-04-09 02:56:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:15.843160 | orchestrator | 2026-04-09 02:56:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:56:15.843323 | orchestrator | 2026-04-09 02:56:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:18.896759 | orchestrator | 2026-04-09 02:56:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:18.898469 | orchestrator | 2026-04-09 02:56:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:18.898518 | orchestrator | 2026-04-09 02:56:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:21.955890 | orchestrator | 2026-04-09 02:56:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:21.958357 | orchestrator | 2026-04-09 02:56:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:21.958577 | orchestrator | 2026-04-09 02:56:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:25.006597 | orchestrator | 2026-04-09 02:56:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:25.009684 | orchestrator | 2026-04-09 02:56:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:25.009750 | orchestrator | 2026-04-09 02:56:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:28.052550 | orchestrator | 2026-04-09 02:56:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:28.052898 | orchestrator | 2026-04-09 02:56:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:28.052941 | orchestrator | 2026-04-09 02:56:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:31.096358 | orchestrator | 2026-04-09 02:56:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:31.097236 | orchestrator | 2026-04-09 02:56:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:31.097268 | orchestrator | 2026-04-09 02:56:31 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 02:56:34.145691 | orchestrator | 2026-04-09 02:56:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:34.148189 | orchestrator | 2026-04-09 02:56:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:34.148249 | orchestrator | 2026-04-09 02:56:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:37.191073 | orchestrator | 2026-04-09 02:56:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:37.193172 | orchestrator | 2026-04-09 02:56:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:37.193259 | orchestrator | 2026-04-09 02:56:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:40.230199 | orchestrator | 2026-04-09 02:56:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:40.231550 | orchestrator | 2026-04-09 02:56:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:40.231582 | orchestrator | 2026-04-09 02:56:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:43.274199 | orchestrator | 2026-04-09 02:56:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:43.276768 | orchestrator | 2026-04-09 02:56:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:43.276837 | orchestrator | 2026-04-09 02:56:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:46.325426 | orchestrator | 2026-04-09 02:56:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:46.328250 | orchestrator | 2026-04-09 02:56:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:46.328511 | orchestrator | 2026-04-09 02:56:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:49.385444 | orchestrator | 2026-04-09 
02:56:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:49.388172 | orchestrator | 2026-04-09 02:56:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:49.388239 | orchestrator | 2026-04-09 02:56:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:52.435689 | orchestrator | 2026-04-09 02:56:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:52.437545 | orchestrator | 2026-04-09 02:56:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:52.437603 | orchestrator | 2026-04-09 02:56:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:55.486282 | orchestrator | 2026-04-09 02:56:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:55.488491 | orchestrator | 2026-04-09 02:56:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:55.488535 | orchestrator | 2026-04-09 02:56:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:56:58.537141 | orchestrator | 2026-04-09 02:56:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:56:58.539443 | orchestrator | 2026-04-09 02:56:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:56:58.539502 | orchestrator | 2026-04-09 02:56:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:57:01.583940 | orchestrator | 2026-04-09 02:57:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:57:01.584820 | orchestrator | 2026-04-09 02:57:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:57:01.584882 | orchestrator | 2026-04-09 02:57:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:57:04.635790 | orchestrator | 2026-04-09 02:57:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 02:57:04.638276 | orchestrator | 2026-04-09 02:57:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:57:04.638373 | orchestrator | 2026-04-09 02:57:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:57:07.683732 | orchestrator | 2026-04-09 02:57:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:57:07.685198 | orchestrator | 2026-04-09 02:57:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:57:07.685254 | orchestrator | 2026-04-09 02:57:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:57:10.728117 | orchestrator | 2026-04-09 02:57:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:57:10.731468 | orchestrator | 2026-04-09 02:57:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:57:10.731549 | orchestrator | 2026-04-09 02:57:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:57:13.779728 | orchestrator | 2026-04-09 02:57:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:57:13.782317 | orchestrator | 2026-04-09 02:57:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:57:13.782412 | orchestrator | 2026-04-09 02:57:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:57:16.840784 | orchestrator | 2026-04-09 02:57:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:57:16.842956 | orchestrator | 2026-04-09 02:57:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:57:16.843024 | orchestrator | 2026-04-09 02:57:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:57:19.888929 | orchestrator | 2026-04-09 02:57:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:57:19.889680 | orchestrator | 2026-04-09 02:57:19 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:57:19.889724 | orchestrator | 2026-04-09 02:57:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:57:22.945588 | orchestrator | 2026-04-09 02:57:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:57:22.946405 | orchestrator | 2026-04-09 02:57:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:57:22.946551 | orchestrator | 2026-04-09 02:57:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:57:26.005002 | orchestrator | 2026-04-09 02:57:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:57:26.006313 | orchestrator | 2026-04-09 02:57:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:57:26.006707 | orchestrator | 2026-04-09 02:57:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:57:29.070370 | orchestrator | 2026-04-09 02:57:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:57:29.072224 | orchestrator | 2026-04-09 02:57:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:57:29.072295 | orchestrator | 2026-04-09 02:57:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:57:32.123180 | orchestrator | 2026-04-09 02:57:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:57:32.124902 | orchestrator | 2026-04-09 02:57:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 02:57:32.124931 | orchestrator | 2026-04-09 02:57:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 02:57:35.179826 | orchestrator | 2026-04-09 02:57:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 02:57:35.182929 | orchestrator | 2026-04-09 02:57:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
02:57:35.183006 | orchestrator | 2026-04-09 02:57:35 | INFO  | Wait 1 second(s) until the next check
2026-04-09 02:57:38.223155 | orchestrator | 2026-04-09 02:57:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 02:57:38.226954 | orchestrator | 2026-04-09 02:57:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 02:57:38.227095 | orchestrator | 2026-04-09 02:57:38 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 02:57:41 to 03:03:04; both tasks remained in state STARTED throughout ...]
2026-04-09 03:03:07.571475 | orchestrator | 2026-04-09 03:03:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 03:03:07.571897 | orchestrator | 2026-04-09 03:03:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 03:03:07.572198 | orchestrator | 2026-04-09 03:03:07 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:03:10.623477 | orchestrator | 2026-04-09 03:03:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:10.625170 | orchestrator | 2026-04-09 03:03:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:10.625235 | orchestrator | 2026-04-09 03:03:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:13.675977 | orchestrator | 2026-04-09 03:03:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:13.676707 | orchestrator | 2026-04-09 03:03:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:13.676727 | orchestrator | 2026-04-09 03:03:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:16.726083 | orchestrator | 2026-04-09 03:03:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:16.727206 | orchestrator | 2026-04-09 03:03:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:16.727240 | orchestrator | 2026-04-09 03:03:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:19.773805 | orchestrator | 2026-04-09 03:03:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:19.774792 | orchestrator | 2026-04-09 03:03:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:19.774829 | orchestrator | 2026-04-09 03:03:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:22.823073 | orchestrator | 2026-04-09 03:03:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:22.825525 | orchestrator | 2026-04-09 03:03:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:22.825601 | orchestrator | 2026-04-09 03:03:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:25.872307 | orchestrator | 2026-04-09 
03:03:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:25.872537 | orchestrator | 2026-04-09 03:03:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:25.872564 | orchestrator | 2026-04-09 03:03:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:28.922161 | orchestrator | 2026-04-09 03:03:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:28.922347 | orchestrator | 2026-04-09 03:03:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:28.922366 | orchestrator | 2026-04-09 03:03:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:31.973911 | orchestrator | 2026-04-09 03:03:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:31.975129 | orchestrator | 2026-04-09 03:03:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:31.975159 | orchestrator | 2026-04-09 03:03:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:35.023048 | orchestrator | 2026-04-09 03:03:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:35.025544 | orchestrator | 2026-04-09 03:03:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:35.025610 | orchestrator | 2026-04-09 03:03:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:38.078710 | orchestrator | 2026-04-09 03:03:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:38.079642 | orchestrator | 2026-04-09 03:03:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:38.080351 | orchestrator | 2026-04-09 03:03:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:41.127645 | orchestrator | 2026-04-09 03:03:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:03:41.128963 | orchestrator | 2026-04-09 03:03:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:41.129065 | orchestrator | 2026-04-09 03:03:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:44.178713 | orchestrator | 2026-04-09 03:03:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:44.178844 | orchestrator | 2026-04-09 03:03:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:44.178873 | orchestrator | 2026-04-09 03:03:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:47.231401 | orchestrator | 2026-04-09 03:03:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:47.233216 | orchestrator | 2026-04-09 03:03:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:47.233284 | orchestrator | 2026-04-09 03:03:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:50.277554 | orchestrator | 2026-04-09 03:03:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:50.279133 | orchestrator | 2026-04-09 03:03:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:50.279206 | orchestrator | 2026-04-09 03:03:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:53.318837 | orchestrator | 2026-04-09 03:03:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:53.321445 | orchestrator | 2026-04-09 03:03:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:53.321528 | orchestrator | 2026-04-09 03:03:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:56.367421 | orchestrator | 2026-04-09 03:03:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:56.369695 | orchestrator | 2026-04-09 03:03:56 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:56.369718 | orchestrator | 2026-04-09 03:03:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:03:59.421693 | orchestrator | 2026-04-09 03:03:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:03:59.424015 | orchestrator | 2026-04-09 03:03:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:03:59.424090 | orchestrator | 2026-04-09 03:03:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:04:02.467762 | orchestrator | 2026-04-09 03:04:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:04:02.468641 | orchestrator | 2026-04-09 03:04:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:04:02.468725 | orchestrator | 2026-04-09 03:04:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:04:05.514776 | orchestrator | 2026-04-09 03:04:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:04:05.517089 | orchestrator | 2026-04-09 03:04:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:04:05.517176 | orchestrator | 2026-04-09 03:04:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:04:08.561747 | orchestrator | 2026-04-09 03:04:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:04:08.563657 | orchestrator | 2026-04-09 03:04:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:04:08.563817 | orchestrator | 2026-04-09 03:04:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:04:11.617843 | orchestrator | 2026-04-09 03:04:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:04:11.618511 | orchestrator | 2026-04-09 03:04:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:04:11.618549 | orchestrator | 2026-04-09 03:04:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:04:14.670497 | orchestrator | 2026-04-09 03:04:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:04:14.672490 | orchestrator | 2026-04-09 03:04:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:04:14.672580 | orchestrator | 2026-04-09 03:04:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:04:17.720400 | orchestrator | 2026-04-09 03:04:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:04:17.721490 | orchestrator | 2026-04-09 03:04:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:04:17.721525 | orchestrator | 2026-04-09 03:04:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:04:20.767190 | orchestrator | 2026-04-09 03:04:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:04:20.769112 | orchestrator | 2026-04-09 03:04:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:04:20.769176 | orchestrator | 2026-04-09 03:04:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:04:23.816737 | orchestrator | 2026-04-09 03:04:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:04:23.818233 | orchestrator | 2026-04-09 03:04:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:04:23.818308 | orchestrator | 2026-04-09 03:04:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:04:26.871711 | orchestrator | 2026-04-09 03:04:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:04:26.872928 | orchestrator | 2026-04-09 03:04:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:04:26.873038 | orchestrator | 2026-04-09 03:04:26 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:04:29.925465 | orchestrator | 2026-04-09 03:04:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:04:29.926675 | orchestrator | 2026-04-09 03:04:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:04:29.926724 | orchestrator | 2026-04-09 03:04:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:04:32.972718 | orchestrator | 2026-04-09 03:04:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:04:32.973490 | orchestrator | 2026-04-09 03:04:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:04:32.973525 | orchestrator | 2026-04-09 03:04:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:04:36.018507 | orchestrator | 2026-04-09 03:04:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:04:36.020094 | orchestrator | 2026-04-09 03:04:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:04:36.020131 | orchestrator | 2026-04-09 03:04:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:06:39.168143 | orchestrator | 2026-04-09 03:06:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:06:39.168971 | orchestrator | 2026-04-09 03:06:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:06:39.169033 | orchestrator | 2026-04-09 03:06:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:06:42.220473 | orchestrator | 2026-04-09 03:06:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:06:42.221964 | orchestrator | 2026-04-09 03:06:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:06:42.222116 | orchestrator | 2026-04-09 03:06:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:06:45.272039 | orchestrator | 2026-04-09 
03:06:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:06:45.274097 | orchestrator | 2026-04-09 03:06:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:06:45.274144 | orchestrator | 2026-04-09 03:06:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:06:48.319359 | orchestrator | 2026-04-09 03:06:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:06:48.320480 | orchestrator | 2026-04-09 03:06:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:06:48.320522 | orchestrator | 2026-04-09 03:06:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:06:51.373088 | orchestrator | 2026-04-09 03:06:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:06:51.374907 | orchestrator | 2026-04-09 03:06:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:06:51.375284 | orchestrator | 2026-04-09 03:06:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:06:54.419280 | orchestrator | 2026-04-09 03:06:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:06:54.421007 | orchestrator | 2026-04-09 03:06:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:06:54.421114 | orchestrator | 2026-04-09 03:06:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:06:57.463957 | orchestrator | 2026-04-09 03:06:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:06:57.465918 | orchestrator | 2026-04-09 03:06:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:06:57.466141 | orchestrator | 2026-04-09 03:06:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:00.508988 | orchestrator | 2026-04-09 03:07:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:07:00.509532 | orchestrator | 2026-04-09 03:07:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:00.509605 | orchestrator | 2026-04-09 03:07:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:03.550565 | orchestrator | 2026-04-09 03:07:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:03.551448 | orchestrator | 2026-04-09 03:07:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:03.551482 | orchestrator | 2026-04-09 03:07:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:06.599778 | orchestrator | 2026-04-09 03:07:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:06.601926 | orchestrator | 2026-04-09 03:07:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:06.602194 | orchestrator | 2026-04-09 03:07:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:09.646666 | orchestrator | 2026-04-09 03:07:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:09.648158 | orchestrator | 2026-04-09 03:07:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:09.648195 | orchestrator | 2026-04-09 03:07:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:12.695865 | orchestrator | 2026-04-09 03:07:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:12.696252 | orchestrator | 2026-04-09 03:07:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:12.696281 | orchestrator | 2026-04-09 03:07:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:15.743937 | orchestrator | 2026-04-09 03:07:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:15.746717 | orchestrator | 2026-04-09 03:07:15 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:15.747006 | orchestrator | 2026-04-09 03:07:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:18.793979 | orchestrator | 2026-04-09 03:07:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:18.796125 | orchestrator | 2026-04-09 03:07:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:18.796200 | orchestrator | 2026-04-09 03:07:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:21.848154 | orchestrator | 2026-04-09 03:07:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:21.850751 | orchestrator | 2026-04-09 03:07:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:21.850808 | orchestrator | 2026-04-09 03:07:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:24.891835 | orchestrator | 2026-04-09 03:07:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:24.892995 | orchestrator | 2026-04-09 03:07:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:24.893062 | orchestrator | 2026-04-09 03:07:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:27.938723 | orchestrator | 2026-04-09 03:07:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:27.939211 | orchestrator | 2026-04-09 03:07:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:27.939409 | orchestrator | 2026-04-09 03:07:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:30.981957 | orchestrator | 2026-04-09 03:07:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:30.984877 | orchestrator | 2026-04-09 03:07:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:07:30.984940 | orchestrator | 2026-04-09 03:07:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:34.034348 | orchestrator | 2026-04-09 03:07:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:34.038315 | orchestrator | 2026-04-09 03:07:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:34.038444 | orchestrator | 2026-04-09 03:07:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:37.084237 | orchestrator | 2026-04-09 03:07:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:37.087241 | orchestrator | 2026-04-09 03:07:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:37.087350 | orchestrator | 2026-04-09 03:07:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:40.133169 | orchestrator | 2026-04-09 03:07:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:40.134564 | orchestrator | 2026-04-09 03:07:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:40.134695 | orchestrator | 2026-04-09 03:07:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:43.176433 | orchestrator | 2026-04-09 03:07:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:43.178283 | orchestrator | 2026-04-09 03:07:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:43.178360 | orchestrator | 2026-04-09 03:07:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:46.217868 | orchestrator | 2026-04-09 03:07:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:46.219587 | orchestrator | 2026-04-09 03:07:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:46.219711 | orchestrator | 2026-04-09 03:07:46 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:07:49.262949 | orchestrator | 2026-04-09 03:07:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:49.264230 | orchestrator | 2026-04-09 03:07:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:49.264285 | orchestrator | 2026-04-09 03:07:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:52.307175 | orchestrator | 2026-04-09 03:07:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:52.309823 | orchestrator | 2026-04-09 03:07:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:52.309894 | orchestrator | 2026-04-09 03:07:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:55.351995 | orchestrator | 2026-04-09 03:07:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:55.353135 | orchestrator | 2026-04-09 03:07:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:55.353190 | orchestrator | 2026-04-09 03:07:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:07:58.395247 | orchestrator | 2026-04-09 03:07:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:07:58.396935 | orchestrator | 2026-04-09 03:07:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:07:58.396971 | orchestrator | 2026-04-09 03:07:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:01.446788 | orchestrator | 2026-04-09 03:08:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:01.448431 | orchestrator | 2026-04-09 03:08:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:01.448483 | orchestrator | 2026-04-09 03:08:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:04.490770 | orchestrator | 2026-04-09 
03:08:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:04.493166 | orchestrator | 2026-04-09 03:08:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:04.493213 | orchestrator | 2026-04-09 03:08:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:07.542555 | orchestrator | 2026-04-09 03:08:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:07.544851 | orchestrator | 2026-04-09 03:08:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:07.545012 | orchestrator | 2026-04-09 03:08:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:10.583974 | orchestrator | 2026-04-09 03:08:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:10.584693 | orchestrator | 2026-04-09 03:08:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:10.584718 | orchestrator | 2026-04-09 03:08:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:13.627172 | orchestrator | 2026-04-09 03:08:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:13.628730 | orchestrator | 2026-04-09 03:08:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:13.628791 | orchestrator | 2026-04-09 03:08:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:16.681937 | orchestrator | 2026-04-09 03:08:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:16.684355 | orchestrator | 2026-04-09 03:08:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:16.684527 | orchestrator | 2026-04-09 03:08:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:19.725313 | orchestrator | 2026-04-09 03:08:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:08:19.725552 | orchestrator | 2026-04-09 03:08:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:19.725695 | orchestrator | 2026-04-09 03:08:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:22.771200 | orchestrator | 2026-04-09 03:08:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:22.772757 | orchestrator | 2026-04-09 03:08:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:22.772914 | orchestrator | 2026-04-09 03:08:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:25.823699 | orchestrator | 2026-04-09 03:08:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:25.825302 | orchestrator | 2026-04-09 03:08:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:25.825375 | orchestrator | 2026-04-09 03:08:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:28.870209 | orchestrator | 2026-04-09 03:08:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:28.872368 | orchestrator | 2026-04-09 03:08:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:28.872413 | orchestrator | 2026-04-09 03:08:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:31.925868 | orchestrator | 2026-04-09 03:08:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:31.926240 | orchestrator | 2026-04-09 03:08:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:31.926371 | orchestrator | 2026-04-09 03:08:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:34.971796 | orchestrator | 2026-04-09 03:08:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:34.973166 | orchestrator | 2026-04-09 03:08:34 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:34.973228 | orchestrator | 2026-04-09 03:08:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:38.022649 | orchestrator | 2026-04-09 03:08:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:38.022754 | orchestrator | 2026-04-09 03:08:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:38.022764 | orchestrator | 2026-04-09 03:08:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:41.082134 | orchestrator | 2026-04-09 03:08:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:41.082404 | orchestrator | 2026-04-09 03:08:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:41.082418 | orchestrator | 2026-04-09 03:08:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:44.126675 | orchestrator | 2026-04-09 03:08:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:44.127039 | orchestrator | 2026-04-09 03:08:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:44.127065 | orchestrator | 2026-04-09 03:08:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:47.174704 | orchestrator | 2026-04-09 03:08:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:47.177270 | orchestrator | 2026-04-09 03:08:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:08:47.177324 | orchestrator | 2026-04-09 03:08:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:08:50.226735 | orchestrator | 2026-04-09 03:08:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:08:50.230302 | orchestrator | 2026-04-09 03:08:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:08:50.230421 | orchestrator | 2026-04-09 03:08:50 | INFO  | Wait 1 second(s) until the next check
2026-04-09 03:08:53.282294 | orchestrator | 2026-04-09 03:08:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 03:08:53.283316 | orchestrator | 2026-04-09 03:08:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 03:08:53.283405 | orchestrator | 2026-04-09 03:08:53 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:08:56 through 03:13:48; both tasks remained in state STARTED throughout ...]
2026-04-09 03:13:52.028340 | orchestrator | 2026-04-09 03:13:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:13:52.029879 | orchestrator | 2026-04-09 03:13:52 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:13:52.030131 | orchestrator | 2026-04-09 03:13:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:13:55.084448 | orchestrator | 2026-04-09 03:13:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:13:55.086010 | orchestrator | 2026-04-09 03:13:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:13:55.086237 | orchestrator | 2026-04-09 03:13:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:13:58.134844 | orchestrator | 2026-04-09 03:13:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:13:58.136191 | orchestrator | 2026-04-09 03:13:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:13:58.136279 | orchestrator | 2026-04-09 03:13:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:01.185445 | orchestrator | 2026-04-09 03:14:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:01.187103 | orchestrator | 2026-04-09 03:14:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:01.187152 | orchestrator | 2026-04-09 03:14:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:04.232743 | orchestrator | 2026-04-09 03:14:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:04.232825 | orchestrator | 2026-04-09 03:14:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:04.232836 | orchestrator | 2026-04-09 03:14:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:07.277860 | orchestrator | 2026-04-09 03:14:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:07.281005 | orchestrator | 2026-04-09 03:14:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:14:07.281081 | orchestrator | 2026-04-09 03:14:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:10.337587 | orchestrator | 2026-04-09 03:14:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:10.340339 | orchestrator | 2026-04-09 03:14:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:10.340408 | orchestrator | 2026-04-09 03:14:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:13.388829 | orchestrator | 2026-04-09 03:14:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:13.390399 | orchestrator | 2026-04-09 03:14:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:13.390527 | orchestrator | 2026-04-09 03:14:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:16.435926 | orchestrator | 2026-04-09 03:14:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:16.436564 | orchestrator | 2026-04-09 03:14:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:16.436663 | orchestrator | 2026-04-09 03:14:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:19.485901 | orchestrator | 2026-04-09 03:14:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:19.488645 | orchestrator | 2026-04-09 03:14:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:19.488733 | orchestrator | 2026-04-09 03:14:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:22.533530 | orchestrator | 2026-04-09 03:14:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:22.534863 | orchestrator | 2026-04-09 03:14:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:22.534967 | orchestrator | 2026-04-09 03:14:22 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:14:25.575988 | orchestrator | 2026-04-09 03:14:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:25.578544 | orchestrator | 2026-04-09 03:14:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:25.578627 | orchestrator | 2026-04-09 03:14:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:28.630000 | orchestrator | 2026-04-09 03:14:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:28.632149 | orchestrator | 2026-04-09 03:14:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:28.632346 | orchestrator | 2026-04-09 03:14:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:31.681819 | orchestrator | 2026-04-09 03:14:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:31.684089 | orchestrator | 2026-04-09 03:14:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:31.684231 | orchestrator | 2026-04-09 03:14:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:34.736728 | orchestrator | 2026-04-09 03:14:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:34.738441 | orchestrator | 2026-04-09 03:14:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:34.738521 | orchestrator | 2026-04-09 03:14:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:37.785347 | orchestrator | 2026-04-09 03:14:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:37.787914 | orchestrator | 2026-04-09 03:14:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:37.787993 | orchestrator | 2026-04-09 03:14:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:40.835208 | orchestrator | 2026-04-09 
03:14:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:40.838733 | orchestrator | 2026-04-09 03:14:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:40.838795 | orchestrator | 2026-04-09 03:14:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:43.890419 | orchestrator | 2026-04-09 03:14:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:43.890922 | orchestrator | 2026-04-09 03:14:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:43.891024 | orchestrator | 2026-04-09 03:14:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:46.955507 | orchestrator | 2026-04-09 03:14:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:46.956188 | orchestrator | 2026-04-09 03:14:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:46.956231 | orchestrator | 2026-04-09 03:14:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:50.008327 | orchestrator | 2026-04-09 03:14:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:50.009288 | orchestrator | 2026-04-09 03:14:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:50.009373 | orchestrator | 2026-04-09 03:14:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:53.061816 | orchestrator | 2026-04-09 03:14:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:53.062720 | orchestrator | 2026-04-09 03:14:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:53.062814 | orchestrator | 2026-04-09 03:14:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:56.113251 | orchestrator | 2026-04-09 03:14:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:14:56.116476 | orchestrator | 2026-04-09 03:14:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:56.116584 | orchestrator | 2026-04-09 03:14:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:14:59.161583 | orchestrator | 2026-04-09 03:14:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:14:59.162536 | orchestrator | 2026-04-09 03:14:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:14:59.162576 | orchestrator | 2026-04-09 03:14:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:02.218367 | orchestrator | 2026-04-09 03:15:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:02.219375 | orchestrator | 2026-04-09 03:15:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:02.219493 | orchestrator | 2026-04-09 03:15:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:05.269980 | orchestrator | 2026-04-09 03:15:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:05.271346 | orchestrator | 2026-04-09 03:15:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:05.271405 | orchestrator | 2026-04-09 03:15:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:08.326537 | orchestrator | 2026-04-09 03:15:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:08.327016 | orchestrator | 2026-04-09 03:15:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:08.327053 | orchestrator | 2026-04-09 03:15:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:11.379732 | orchestrator | 2026-04-09 03:15:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:11.380665 | orchestrator | 2026-04-09 03:15:11 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:11.380696 | orchestrator | 2026-04-09 03:15:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:14.437600 | orchestrator | 2026-04-09 03:15:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:14.440378 | orchestrator | 2026-04-09 03:15:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:14.440551 | orchestrator | 2026-04-09 03:15:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:17.491657 | orchestrator | 2026-04-09 03:15:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:17.492525 | orchestrator | 2026-04-09 03:15:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:17.492541 | orchestrator | 2026-04-09 03:15:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:20.539737 | orchestrator | 2026-04-09 03:15:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:20.539969 | orchestrator | 2026-04-09 03:15:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:20.540300 | orchestrator | 2026-04-09 03:15:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:23.587352 | orchestrator | 2026-04-09 03:15:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:23.589189 | orchestrator | 2026-04-09 03:15:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:23.589254 | orchestrator | 2026-04-09 03:15:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:26.640914 | orchestrator | 2026-04-09 03:15:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:26.642767 | orchestrator | 2026-04-09 03:15:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:15:26.642822 | orchestrator | 2026-04-09 03:15:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:29.694081 | orchestrator | 2026-04-09 03:15:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:29.695482 | orchestrator | 2026-04-09 03:15:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:29.695526 | orchestrator | 2026-04-09 03:15:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:32.743331 | orchestrator | 2026-04-09 03:15:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:32.743420 | orchestrator | 2026-04-09 03:15:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:32.743430 | orchestrator | 2026-04-09 03:15:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:35.791798 | orchestrator | 2026-04-09 03:15:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:35.792759 | orchestrator | 2026-04-09 03:15:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:35.792813 | orchestrator | 2026-04-09 03:15:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:38.833406 | orchestrator | 2026-04-09 03:15:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:38.834359 | orchestrator | 2026-04-09 03:15:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:38.834536 | orchestrator | 2026-04-09 03:15:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:41.884857 | orchestrator | 2026-04-09 03:15:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:41.886623 | orchestrator | 2026-04-09 03:15:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:41.886681 | orchestrator | 2026-04-09 03:15:41 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:15:44.926393 | orchestrator | 2026-04-09 03:15:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:44.929054 | orchestrator | 2026-04-09 03:15:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:44.929210 | orchestrator | 2026-04-09 03:15:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:47.978320 | orchestrator | 2026-04-09 03:15:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:47.980810 | orchestrator | 2026-04-09 03:15:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:47.980857 | orchestrator | 2026-04-09 03:15:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:51.023787 | orchestrator | 2026-04-09 03:15:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:51.023900 | orchestrator | 2026-04-09 03:15:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:51.023911 | orchestrator | 2026-04-09 03:15:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:54.070855 | orchestrator | 2026-04-09 03:15:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:54.073059 | orchestrator | 2026-04-09 03:15:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:54.073168 | orchestrator | 2026-04-09 03:15:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:15:57.127276 | orchestrator | 2026-04-09 03:15:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:15:57.129380 | orchestrator | 2026-04-09 03:15:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:15:57.129451 | orchestrator | 2026-04-09 03:15:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:00.179350 | orchestrator | 2026-04-09 
03:16:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:00.181914 | orchestrator | 2026-04-09 03:16:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:00.181979 | orchestrator | 2026-04-09 03:16:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:03.233813 | orchestrator | 2026-04-09 03:16:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:03.235752 | orchestrator | 2026-04-09 03:16:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:03.235784 | orchestrator | 2026-04-09 03:16:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:06.285291 | orchestrator | 2026-04-09 03:16:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:06.286339 | orchestrator | 2026-04-09 03:16:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:06.286421 | orchestrator | 2026-04-09 03:16:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:09.342912 | orchestrator | 2026-04-09 03:16:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:09.345099 | orchestrator | 2026-04-09 03:16:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:09.345157 | orchestrator | 2026-04-09 03:16:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:12.407773 | orchestrator | 2026-04-09 03:16:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:12.410549 | orchestrator | 2026-04-09 03:16:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:12.410708 | orchestrator | 2026-04-09 03:16:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:15.450925 | orchestrator | 2026-04-09 03:16:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:16:15.452323 | orchestrator | 2026-04-09 03:16:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:15.452372 | orchestrator | 2026-04-09 03:16:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:18.505780 | orchestrator | 2026-04-09 03:16:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:18.506163 | orchestrator | 2026-04-09 03:16:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:18.506195 | orchestrator | 2026-04-09 03:16:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:21.556354 | orchestrator | 2026-04-09 03:16:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:21.557580 | orchestrator | 2026-04-09 03:16:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:21.557632 | orchestrator | 2026-04-09 03:16:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:24.604443 | orchestrator | 2026-04-09 03:16:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:24.606197 | orchestrator | 2026-04-09 03:16:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:24.606331 | orchestrator | 2026-04-09 03:16:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:27.659841 | orchestrator | 2026-04-09 03:16:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:27.661250 | orchestrator | 2026-04-09 03:16:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:27.661301 | orchestrator | 2026-04-09 03:16:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:30.705081 | orchestrator | 2026-04-09 03:16:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:30.707490 | orchestrator | 2026-04-09 03:16:30 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:30.707522 | orchestrator | 2026-04-09 03:16:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:33.757016 | orchestrator | 2026-04-09 03:16:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:33.758981 | orchestrator | 2026-04-09 03:16:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:33.759035 | orchestrator | 2026-04-09 03:16:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:36.810795 | orchestrator | 2026-04-09 03:16:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:36.814413 | orchestrator | 2026-04-09 03:16:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:36.814514 | orchestrator | 2026-04-09 03:16:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:39.861916 | orchestrator | 2026-04-09 03:16:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:39.862822 | orchestrator | 2026-04-09 03:16:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:39.863047 | orchestrator | 2026-04-09 03:16:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:42.905433 | orchestrator | 2026-04-09 03:16:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:42.905870 | orchestrator | 2026-04-09 03:16:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:42.905900 | orchestrator | 2026-04-09 03:16:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:45.959679 | orchestrator | 2026-04-09 03:16:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:45.959760 | orchestrator | 2026-04-09 03:16:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:16:45.959770 | orchestrator | 2026-04-09 03:16:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:49.008557 | orchestrator | 2026-04-09 03:16:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:49.010189 | orchestrator | 2026-04-09 03:16:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:49.010332 | orchestrator | 2026-04-09 03:16:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:52.060787 | orchestrator | 2026-04-09 03:16:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:52.063512 | orchestrator | 2026-04-09 03:16:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:52.063583 | orchestrator | 2026-04-09 03:16:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:55.106009 | orchestrator | 2026-04-09 03:16:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:55.107740 | orchestrator | 2026-04-09 03:16:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:55.107789 | orchestrator | 2026-04-09 03:16:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:16:58.160593 | orchestrator | 2026-04-09 03:16:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:16:58.163036 | orchestrator | 2026-04-09 03:16:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:16:58.163080 | orchestrator | 2026-04-09 03:16:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:17:01.205221 | orchestrator | 2026-04-09 03:17:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:17:01.206013 | orchestrator | 2026-04-09 03:17:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:17:01.206456 | orchestrator | 2026-04-09 03:17:01 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:17:04.258129 | orchestrator | 2026-04-09 03:17:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:17:04.259957 | orchestrator | 2026-04-09 03:17:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:17:04.260005 | orchestrator | 2026-04-09 03:17:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:17:07.306691 | orchestrator | 2026-04-09 03:17:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:17:07.307457 | orchestrator | 2026-04-09 03:17:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:17:07.307482 | orchestrator | 2026-04-09 03:17:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:17:10.353769 | orchestrator | 2026-04-09 03:17:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:17:10.354545 | orchestrator | 2026-04-09 03:17:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:17:10.354576 | orchestrator | 2026-04-09 03:17:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:17:13.409158 | orchestrator | 2026-04-09 03:17:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:17:13.410550 | orchestrator | 2026-04-09 03:17:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:17:13.410583 | orchestrator | 2026-04-09 03:17:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:17:16.456485 | orchestrator | 2026-04-09 03:17:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:17:16.456631 | orchestrator | 2026-04-09 03:17:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:17:16.456645 | orchestrator | 2026-04-09 03:17:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:17:19.501615 | orchestrator | 2026-04-09 
03:17:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:17:19.503462 | orchestrator | 2026-04-09 03:17:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:17:19.503528 | orchestrator | 2026-04-09 03:17:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:17:22.555107 | orchestrator | 2026-04-09 03:17:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:17:22.557699 | orchestrator | 2026-04-09 03:17:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:17:22.557747 | orchestrator | 2026-04-09 03:17:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:17:25.609695 | orchestrator | 2026-04-09 03:17:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:17:25.610733 | orchestrator | 2026-04-09 03:17:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:17:25.611083 | orchestrator | 2026-04-09 03:17:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:17:28.663526 | orchestrator | 2026-04-09 03:17:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:17:28.666423 | orchestrator | 2026-04-09 03:17:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:17:28.666526 | orchestrator | 2026-04-09 03:17:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:17:31.717784 | orchestrator | 2026-04-09 03:17:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:17:31.718919 | orchestrator | 2026-04-09 03:17:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:17:31.718952 | orchestrator | 2026-04-09 03:17:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:17:34.772011 | orchestrator | 2026-04-09 03:17:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:17:34.774822 | orchestrator | 2026-04-09 03:17:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:17:34.774892 | orchestrator | 2026-04-09 03:17:34 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 03:17:34 to 03:23:07: tasks 6828e9fb-0b8a-4283-9fa1-3c6673200e24 and 4918e61a-8c4a-42f2-9f33-2d15624c1ede remained in state STARTED throughout ...]
2026-04-09 03:23:07.352166 | orchestrator | 2026-04-09 03:23:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:07.353899 | orchestrator | 2026-04-09 03:23:07 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:07.353961 | orchestrator | 2026-04-09 03:23:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:10.395718 | orchestrator | 2026-04-09 03:23:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:10.397802 | orchestrator | 2026-04-09 03:23:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:10.397886 | orchestrator | 2026-04-09 03:23:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:13.446265 | orchestrator | 2026-04-09 03:23:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:13.448226 | orchestrator | 2026-04-09 03:23:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:13.448380 | orchestrator | 2026-04-09 03:23:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:16.495281 | orchestrator | 2026-04-09 03:23:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:16.496868 | orchestrator | 2026-04-09 03:23:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:16.497096 | orchestrator | 2026-04-09 03:23:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:19.542210 | orchestrator | 2026-04-09 03:23:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:19.543767 | orchestrator | 2026-04-09 03:23:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:19.543879 | orchestrator | 2026-04-09 03:23:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:22.591440 | orchestrator | 2026-04-09 03:23:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:22.593138 | orchestrator | 2026-04-09 03:23:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:23:22.593368 | orchestrator | 2026-04-09 03:23:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:25.641823 | orchestrator | 2026-04-09 03:23:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:25.644211 | orchestrator | 2026-04-09 03:23:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:25.644368 | orchestrator | 2026-04-09 03:23:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:28.699296 | orchestrator | 2026-04-09 03:23:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:28.702564 | orchestrator | 2026-04-09 03:23:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:28.702697 | orchestrator | 2026-04-09 03:23:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:31.747401 | orchestrator | 2026-04-09 03:23:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:31.748392 | orchestrator | 2026-04-09 03:23:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:31.748669 | orchestrator | 2026-04-09 03:23:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:34.795779 | orchestrator | 2026-04-09 03:23:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:34.797428 | orchestrator | 2026-04-09 03:23:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:34.797563 | orchestrator | 2026-04-09 03:23:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:37.850437 | orchestrator | 2026-04-09 03:23:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:37.851867 | orchestrator | 2026-04-09 03:23:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:37.851929 | orchestrator | 2026-04-09 03:23:37 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:23:40.904629 | orchestrator | 2026-04-09 03:23:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:40.906385 | orchestrator | 2026-04-09 03:23:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:40.906460 | orchestrator | 2026-04-09 03:23:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:43.958195 | orchestrator | 2026-04-09 03:23:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:43.960102 | orchestrator | 2026-04-09 03:23:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:43.960163 | orchestrator | 2026-04-09 03:23:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:47.007312 | orchestrator | 2026-04-09 03:23:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:47.008566 | orchestrator | 2026-04-09 03:23:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:47.008634 | orchestrator | 2026-04-09 03:23:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:50.063146 | orchestrator | 2026-04-09 03:23:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:50.064838 | orchestrator | 2026-04-09 03:23:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:50.064869 | orchestrator | 2026-04-09 03:23:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:53.111506 | orchestrator | 2026-04-09 03:23:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:53.113084 | orchestrator | 2026-04-09 03:23:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:53.113123 | orchestrator | 2026-04-09 03:23:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:56.163936 | orchestrator | 2026-04-09 
03:23:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:56.167362 | orchestrator | 2026-04-09 03:23:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:56.167476 | orchestrator | 2026-04-09 03:23:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:23:59.209734 | orchestrator | 2026-04-09 03:23:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:23:59.212210 | orchestrator | 2026-04-09 03:23:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:23:59.212325 | orchestrator | 2026-04-09 03:23:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:02.254077 | orchestrator | 2026-04-09 03:24:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:02.255840 | orchestrator | 2026-04-09 03:24:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:02.256044 | orchestrator | 2026-04-09 03:24:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:05.301880 | orchestrator | 2026-04-09 03:24:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:05.304020 | orchestrator | 2026-04-09 03:24:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:05.304111 | orchestrator | 2026-04-09 03:24:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:08.350394 | orchestrator | 2026-04-09 03:24:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:08.352187 | orchestrator | 2026-04-09 03:24:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:08.352252 | orchestrator | 2026-04-09 03:24:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:11.407235 | orchestrator | 2026-04-09 03:24:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:24:11.408882 | orchestrator | 2026-04-09 03:24:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:11.408911 | orchestrator | 2026-04-09 03:24:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:14.455512 | orchestrator | 2026-04-09 03:24:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:14.457774 | orchestrator | 2026-04-09 03:24:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:14.457871 | orchestrator | 2026-04-09 03:24:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:17.500869 | orchestrator | 2026-04-09 03:24:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:17.502256 | orchestrator | 2026-04-09 03:24:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:17.502305 | orchestrator | 2026-04-09 03:24:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:20.542899 | orchestrator | 2026-04-09 03:24:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:20.543573 | orchestrator | 2026-04-09 03:24:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:20.543605 | orchestrator | 2026-04-09 03:24:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:23.585240 | orchestrator | 2026-04-09 03:24:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:23.588612 | orchestrator | 2026-04-09 03:24:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:23.588688 | orchestrator | 2026-04-09 03:24:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:26.631797 | orchestrator | 2026-04-09 03:24:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:26.633895 | orchestrator | 2026-04-09 03:24:26 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:26.633946 | orchestrator | 2026-04-09 03:24:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:29.675973 | orchestrator | 2026-04-09 03:24:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:29.678221 | orchestrator | 2026-04-09 03:24:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:29.678348 | orchestrator | 2026-04-09 03:24:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:32.734398 | orchestrator | 2026-04-09 03:24:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:32.735455 | orchestrator | 2026-04-09 03:24:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:32.735979 | orchestrator | 2026-04-09 03:24:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:35.787427 | orchestrator | 2026-04-09 03:24:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:35.790345 | orchestrator | 2026-04-09 03:24:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:35.790542 | orchestrator | 2026-04-09 03:24:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:38.838353 | orchestrator | 2026-04-09 03:24:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:38.839849 | orchestrator | 2026-04-09 03:24:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:38.839882 | orchestrator | 2026-04-09 03:24:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:41.893390 | orchestrator | 2026-04-09 03:24:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:41.894178 | orchestrator | 2026-04-09 03:24:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:24:41.894228 | orchestrator | 2026-04-09 03:24:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:44.937162 | orchestrator | 2026-04-09 03:24:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:44.938081 | orchestrator | 2026-04-09 03:24:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:44.938129 | orchestrator | 2026-04-09 03:24:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:47.983966 | orchestrator | 2026-04-09 03:24:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:47.987479 | orchestrator | 2026-04-09 03:24:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:47.987578 | orchestrator | 2026-04-09 03:24:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:51.033696 | orchestrator | 2026-04-09 03:24:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:51.035124 | orchestrator | 2026-04-09 03:24:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:51.035179 | orchestrator | 2026-04-09 03:24:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:54.077569 | orchestrator | 2026-04-09 03:24:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:54.079792 | orchestrator | 2026-04-09 03:24:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:54.079859 | orchestrator | 2026-04-09 03:24:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:24:57.125019 | orchestrator | 2026-04-09 03:24:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:24:57.125791 | orchestrator | 2026-04-09 03:24:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:24:57.125868 | orchestrator | 2026-04-09 03:24:57 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:25:00.172025 | orchestrator | 2026-04-09 03:25:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:00.174091 | orchestrator | 2026-04-09 03:25:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:00.174630 | orchestrator | 2026-04-09 03:25:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:03.218998 | orchestrator | 2026-04-09 03:25:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:03.220056 | orchestrator | 2026-04-09 03:25:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:03.220425 | orchestrator | 2026-04-09 03:25:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:06.274606 | orchestrator | 2026-04-09 03:25:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:06.275851 | orchestrator | 2026-04-09 03:25:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:06.275903 | orchestrator | 2026-04-09 03:25:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:09.327371 | orchestrator | 2026-04-09 03:25:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:09.328670 | orchestrator | 2026-04-09 03:25:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:09.328709 | orchestrator | 2026-04-09 03:25:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:12.383498 | orchestrator | 2026-04-09 03:25:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:12.385019 | orchestrator | 2026-04-09 03:25:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:12.385062 | orchestrator | 2026-04-09 03:25:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:15.439174 | orchestrator | 2026-04-09 
03:25:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:15.440851 | orchestrator | 2026-04-09 03:25:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:15.440929 | orchestrator | 2026-04-09 03:25:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:18.498552 | orchestrator | 2026-04-09 03:25:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:18.502497 | orchestrator | 2026-04-09 03:25:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:18.502584 | orchestrator | 2026-04-09 03:25:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:21.551704 | orchestrator | 2026-04-09 03:25:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:21.554496 | orchestrator | 2026-04-09 03:25:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:21.554744 | orchestrator | 2026-04-09 03:25:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:24.596712 | orchestrator | 2026-04-09 03:25:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:24.599701 | orchestrator | 2026-04-09 03:25:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:24.599829 | orchestrator | 2026-04-09 03:25:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:27.647299 | orchestrator | 2026-04-09 03:25:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:27.647686 | orchestrator | 2026-04-09 03:25:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:27.647726 | orchestrator | 2026-04-09 03:25:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:30.703027 | orchestrator | 2026-04-09 03:25:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:25:30.705124 | orchestrator | 2026-04-09 03:25:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:30.705186 | orchestrator | 2026-04-09 03:25:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:33.754741 | orchestrator | 2026-04-09 03:25:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:33.756469 | orchestrator | 2026-04-09 03:25:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:33.756531 | orchestrator | 2026-04-09 03:25:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:36.808174 | orchestrator | 2026-04-09 03:25:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:36.809658 | orchestrator | 2026-04-09 03:25:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:36.809691 | orchestrator | 2026-04-09 03:25:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:39.854853 | orchestrator | 2026-04-09 03:25:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:39.855747 | orchestrator | 2026-04-09 03:25:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:39.855790 | orchestrator | 2026-04-09 03:25:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:42.904162 | orchestrator | 2026-04-09 03:25:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:42.907246 | orchestrator | 2026-04-09 03:25:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:42.907479 | orchestrator | 2026-04-09 03:25:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:45.954859 | orchestrator | 2026-04-09 03:25:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:45.956457 | orchestrator | 2026-04-09 03:25:45 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:45.956554 | orchestrator | 2026-04-09 03:25:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:49.003088 | orchestrator | 2026-04-09 03:25:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:49.008664 | orchestrator | 2026-04-09 03:25:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:49.008763 | orchestrator | 2026-04-09 03:25:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:52.063364 | orchestrator | 2026-04-09 03:25:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:52.066615 | orchestrator | 2026-04-09 03:25:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:52.066707 | orchestrator | 2026-04-09 03:25:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:55.098832 | orchestrator | 2026-04-09 03:25:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:55.100767 | orchestrator | 2026-04-09 03:25:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:55.100869 | orchestrator | 2026-04-09 03:25:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:25:58.152981 | orchestrator | 2026-04-09 03:25:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:25:58.153340 | orchestrator | 2026-04-09 03:25:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:25:58.153375 | orchestrator | 2026-04-09 03:25:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:01.198638 | orchestrator | 2026-04-09 03:26:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:01.200469 | orchestrator | 2026-04-09 03:26:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:26:01.200532 | orchestrator | 2026-04-09 03:26:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:04.248819 | orchestrator | 2026-04-09 03:26:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:04.250986 | orchestrator | 2026-04-09 03:26:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:04.251014 | orchestrator | 2026-04-09 03:26:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:07.295118 | orchestrator | 2026-04-09 03:26:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:07.295766 | orchestrator | 2026-04-09 03:26:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:07.295896 | orchestrator | 2026-04-09 03:26:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:10.337522 | orchestrator | 2026-04-09 03:26:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:10.340355 | orchestrator | 2026-04-09 03:26:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:10.340454 | orchestrator | 2026-04-09 03:26:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:13.393480 | orchestrator | 2026-04-09 03:26:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:13.395115 | orchestrator | 2026-04-09 03:26:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:13.395164 | orchestrator | 2026-04-09 03:26:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:16.435874 | orchestrator | 2026-04-09 03:26:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:16.436065 | orchestrator | 2026-04-09 03:26:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:16.436096 | orchestrator | 2026-04-09 03:26:16 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:26:19.478002 | orchestrator | 2026-04-09 03:26:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:19.479300 | orchestrator | 2026-04-09 03:26:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:19.479342 | orchestrator | 2026-04-09 03:26:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:22.521411 | orchestrator | 2026-04-09 03:26:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:22.522934 | orchestrator | 2026-04-09 03:26:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:22.522962 | orchestrator | 2026-04-09 03:26:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:25.573011 | orchestrator | 2026-04-09 03:26:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:25.575482 | orchestrator | 2026-04-09 03:26:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:25.575525 | orchestrator | 2026-04-09 03:26:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:28.624126 | orchestrator | 2026-04-09 03:26:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:28.624340 | orchestrator | 2026-04-09 03:26:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:28.624480 | orchestrator | 2026-04-09 03:26:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:31.677597 | orchestrator | 2026-04-09 03:26:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:31.679812 | orchestrator | 2026-04-09 03:26:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:31.679846 | orchestrator | 2026-04-09 03:26:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:34.732120 | orchestrator | 2026-04-09 
03:26:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:34.732680 | orchestrator | 2026-04-09 03:26:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:34.732718 | orchestrator | 2026-04-09 03:26:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:37.785879 | orchestrator | 2026-04-09 03:26:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:37.787430 | orchestrator | 2026-04-09 03:26:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:37.787555 | orchestrator | 2026-04-09 03:26:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:40.837452 | orchestrator | 2026-04-09 03:26:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:40.838785 | orchestrator | 2026-04-09 03:26:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:40.838831 | orchestrator | 2026-04-09 03:26:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:43.886331 | orchestrator | 2026-04-09 03:26:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:43.888363 | orchestrator | 2026-04-09 03:26:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:43.888419 | orchestrator | 2026-04-09 03:26:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:46.929988 | orchestrator | 2026-04-09 03:26:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:46.932914 | orchestrator | 2026-04-09 03:26:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:46.932967 | orchestrator | 2026-04-09 03:26:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:49.979917 | orchestrator | 2026-04-09 03:26:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:26:49.981340 | orchestrator | 2026-04-09 03:26:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:49.981376 | orchestrator | 2026-04-09 03:26:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:53.029967 | orchestrator | 2026-04-09 03:26:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:53.031572 | orchestrator | 2026-04-09 03:26:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:53.031642 | orchestrator | 2026-04-09 03:26:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:56.080979 | orchestrator | 2026-04-09 03:26:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:56.082739 | orchestrator | 2026-04-09 03:26:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:56.082805 | orchestrator | 2026-04-09 03:26:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:26:59.142662 | orchestrator | 2026-04-09 03:26:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:26:59.147284 | orchestrator | 2026-04-09 03:26:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:26:59.147384 | orchestrator | 2026-04-09 03:26:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:27:02.198561 | orchestrator | 2026-04-09 03:27:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:27:02.200063 | orchestrator | 2026-04-09 03:27:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:27:02.200202 | orchestrator | 2026-04-09 03:27:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:27:05.247766 | orchestrator | 2026-04-09 03:27:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:27:05.250004 | orchestrator | 2026-04-09 03:27:05 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:27:05.250172 | orchestrator | 2026-04-09 03:27:05 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 s from 03:27:08 through 03:31:49: tasks 6828e9fb-0b8a-4283-9fa1-3c6673200e24 and 4918e61a-8c4a-42f2-9f33-2d15624c1ede remain in state STARTED, followed each cycle by "Wait 1 second(s) until the next check" ...]
2026-04-09 03:31:52.079405 | orchestrator | 2026-04-09
03:31:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:31:52.080704 | orchestrator | 2026-04-09 03:31:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:31:52.080758 | orchestrator | 2026-04-09 03:31:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:31:55.135197 | orchestrator | 2026-04-09 03:31:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:31:55.138137 | orchestrator | 2026-04-09 03:31:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:31:55.138224 | orchestrator | 2026-04-09 03:31:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:31:58.190564 | orchestrator | 2026-04-09 03:31:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:31:58.191145 | orchestrator | 2026-04-09 03:31:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:31:58.191189 | orchestrator | 2026-04-09 03:31:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:01.234708 | orchestrator | 2026-04-09 03:32:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:01.236481 | orchestrator | 2026-04-09 03:32:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:01.236545 | orchestrator | 2026-04-09 03:32:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:04.288345 | orchestrator | 2026-04-09 03:32:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:04.290085 | orchestrator | 2026-04-09 03:32:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:04.290169 | orchestrator | 2026-04-09 03:32:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:07.340207 | orchestrator | 2026-04-09 03:32:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:32:07.341607 | orchestrator | 2026-04-09 03:32:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:07.341677 | orchestrator | 2026-04-09 03:32:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:10.395634 | orchestrator | 2026-04-09 03:32:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:10.396336 | orchestrator | 2026-04-09 03:32:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:10.396653 | orchestrator | 2026-04-09 03:32:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:13.438773 | orchestrator | 2026-04-09 03:32:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:13.440234 | orchestrator | 2026-04-09 03:32:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:13.440323 | orchestrator | 2026-04-09 03:32:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:16.484292 | orchestrator | 2026-04-09 03:32:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:16.485973 | orchestrator | 2026-04-09 03:32:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:16.486080 | orchestrator | 2026-04-09 03:32:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:19.536754 | orchestrator | 2026-04-09 03:32:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:19.539064 | orchestrator | 2026-04-09 03:32:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:19.539149 | orchestrator | 2026-04-09 03:32:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:22.585740 | orchestrator | 2026-04-09 03:32:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:22.587395 | orchestrator | 2026-04-09 03:32:22 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:22.587423 | orchestrator | 2026-04-09 03:32:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:25.641347 | orchestrator | 2026-04-09 03:32:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:25.643289 | orchestrator | 2026-04-09 03:32:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:25.643381 | orchestrator | 2026-04-09 03:32:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:28.698732 | orchestrator | 2026-04-09 03:32:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:28.698982 | orchestrator | 2026-04-09 03:32:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:28.699014 | orchestrator | 2026-04-09 03:32:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:31.752936 | orchestrator | 2026-04-09 03:32:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:31.756420 | orchestrator | 2026-04-09 03:32:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:31.757119 | orchestrator | 2026-04-09 03:32:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:34.800842 | orchestrator | 2026-04-09 03:32:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:34.802009 | orchestrator | 2026-04-09 03:32:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:34.802346 | orchestrator | 2026-04-09 03:32:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:37.841909 | orchestrator | 2026-04-09 03:32:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:37.842479 | orchestrator | 2026-04-09 03:32:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:32:37.842523 | orchestrator | 2026-04-09 03:32:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:40.891792 | orchestrator | 2026-04-09 03:32:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:40.893985 | orchestrator | 2026-04-09 03:32:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:40.894085 | orchestrator | 2026-04-09 03:32:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:43.940674 | orchestrator | 2026-04-09 03:32:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:43.941983 | orchestrator | 2026-04-09 03:32:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:43.942202 | orchestrator | 2026-04-09 03:32:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:46.993901 | orchestrator | 2026-04-09 03:32:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:46.995719 | orchestrator | 2026-04-09 03:32:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:46.995792 | orchestrator | 2026-04-09 03:32:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:50.055860 | orchestrator | 2026-04-09 03:32:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:50.057544 | orchestrator | 2026-04-09 03:32:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:50.057603 | orchestrator | 2026-04-09 03:32:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:53.097504 | orchestrator | 2026-04-09 03:32:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:53.097926 | orchestrator | 2026-04-09 03:32:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:53.097950 | orchestrator | 2026-04-09 03:32:53 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:32:56.140106 | orchestrator | 2026-04-09 03:32:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:56.142122 | orchestrator | 2026-04-09 03:32:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:56.142176 | orchestrator | 2026-04-09 03:32:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:32:59.187596 | orchestrator | 2026-04-09 03:32:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:32:59.188840 | orchestrator | 2026-04-09 03:32:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:32:59.188868 | orchestrator | 2026-04-09 03:32:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:02.245624 | orchestrator | 2026-04-09 03:33:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:02.247063 | orchestrator | 2026-04-09 03:33:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:02.247114 | orchestrator | 2026-04-09 03:33:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:05.294713 | orchestrator | 2026-04-09 03:33:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:05.295253 | orchestrator | 2026-04-09 03:33:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:05.295278 | orchestrator | 2026-04-09 03:33:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:08.342455 | orchestrator | 2026-04-09 03:33:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:08.343808 | orchestrator | 2026-04-09 03:33:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:08.343852 | orchestrator | 2026-04-09 03:33:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:11.391270 | orchestrator | 2026-04-09 
03:33:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:11.393307 | orchestrator | 2026-04-09 03:33:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:11.393416 | orchestrator | 2026-04-09 03:33:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:14.441747 | orchestrator | 2026-04-09 03:33:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:14.442794 | orchestrator | 2026-04-09 03:33:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:14.442849 | orchestrator | 2026-04-09 03:33:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:17.496975 | orchestrator | 2026-04-09 03:33:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:17.498604 | orchestrator | 2026-04-09 03:33:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:17.498649 | orchestrator | 2026-04-09 03:33:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:20.548762 | orchestrator | 2026-04-09 03:33:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:20.552296 | orchestrator | 2026-04-09 03:33:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:20.552360 | orchestrator | 2026-04-09 03:33:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:23.612819 | orchestrator | 2026-04-09 03:33:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:23.614648 | orchestrator | 2026-04-09 03:33:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:23.614697 | orchestrator | 2026-04-09 03:33:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:26.661848 | orchestrator | 2026-04-09 03:33:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:33:26.664175 | orchestrator | 2026-04-09 03:33:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:26.664230 | orchestrator | 2026-04-09 03:33:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:29.712578 | orchestrator | 2026-04-09 03:33:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:29.715697 | orchestrator | 2026-04-09 03:33:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:29.715772 | orchestrator | 2026-04-09 03:33:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:32.770291 | orchestrator | 2026-04-09 03:33:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:32.773981 | orchestrator | 2026-04-09 03:33:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:32.774143 | orchestrator | 2026-04-09 03:33:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:35.830220 | orchestrator | 2026-04-09 03:33:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:35.832813 | orchestrator | 2026-04-09 03:33:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:35.832903 | orchestrator | 2026-04-09 03:33:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:38.891359 | orchestrator | 2026-04-09 03:33:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:38.893358 | orchestrator | 2026-04-09 03:33:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:38.893431 | orchestrator | 2026-04-09 03:33:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:41.946173 | orchestrator | 2026-04-09 03:33:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:41.949117 | orchestrator | 2026-04-09 03:33:41 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:41.949198 | orchestrator | 2026-04-09 03:33:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:44.994329 | orchestrator | 2026-04-09 03:33:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:44.996099 | orchestrator | 2026-04-09 03:33:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:44.996127 | orchestrator | 2026-04-09 03:33:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:48.048451 | orchestrator | 2026-04-09 03:33:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:48.051262 | orchestrator | 2026-04-09 03:33:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:48.051324 | orchestrator | 2026-04-09 03:33:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:51.092180 | orchestrator | 2026-04-09 03:33:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:51.092826 | orchestrator | 2026-04-09 03:33:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:51.092880 | orchestrator | 2026-04-09 03:33:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:54.141079 | orchestrator | 2026-04-09 03:33:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:54.142377 | orchestrator | 2026-04-09 03:33:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:33:54.142429 | orchestrator | 2026-04-09 03:33:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:33:57.190076 | orchestrator | 2026-04-09 03:33:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:33:57.192533 | orchestrator | 2026-04-09 03:33:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:33:57.192590 | orchestrator | 2026-04-09 03:33:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:00.237408 | orchestrator | 2026-04-09 03:34:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:00.238699 | orchestrator | 2026-04-09 03:34:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:00.238747 | orchestrator | 2026-04-09 03:34:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:03.279416 | orchestrator | 2026-04-09 03:34:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:03.280600 | orchestrator | 2026-04-09 03:34:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:03.280777 | orchestrator | 2026-04-09 03:34:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:06.331684 | orchestrator | 2026-04-09 03:34:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:06.333458 | orchestrator | 2026-04-09 03:34:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:06.333570 | orchestrator | 2026-04-09 03:34:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:09.386625 | orchestrator | 2026-04-09 03:34:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:09.388233 | orchestrator | 2026-04-09 03:34:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:09.388282 | orchestrator | 2026-04-09 03:34:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:12.440881 | orchestrator | 2026-04-09 03:34:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:12.442609 | orchestrator | 2026-04-09 03:34:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:12.442738 | orchestrator | 2026-04-09 03:34:12 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:34:15.495898 | orchestrator | 2026-04-09 03:34:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:15.497656 | orchestrator | 2026-04-09 03:34:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:15.497725 | orchestrator | 2026-04-09 03:34:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:18.543453 | orchestrator | 2026-04-09 03:34:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:18.544419 | orchestrator | 2026-04-09 03:34:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:18.544435 | orchestrator | 2026-04-09 03:34:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:21.593131 | orchestrator | 2026-04-09 03:34:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:21.594206 | orchestrator | 2026-04-09 03:34:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:21.594243 | orchestrator | 2026-04-09 03:34:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:24.642856 | orchestrator | 2026-04-09 03:34:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:24.644079 | orchestrator | 2026-04-09 03:34:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:24.644160 | orchestrator | 2026-04-09 03:34:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:27.690879 | orchestrator | 2026-04-09 03:34:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:27.692760 | orchestrator | 2026-04-09 03:34:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:27.692920 | orchestrator | 2026-04-09 03:34:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:30.736140 | orchestrator | 2026-04-09 
03:34:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:30.738534 | orchestrator | 2026-04-09 03:34:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:30.738748 | orchestrator | 2026-04-09 03:34:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:33.780361 | orchestrator | 2026-04-09 03:34:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:33.780845 | orchestrator | 2026-04-09 03:34:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:33.781320 | orchestrator | 2026-04-09 03:34:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:36.837322 | orchestrator | 2026-04-09 03:34:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:36.839755 | orchestrator | 2026-04-09 03:34:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:36.839841 | orchestrator | 2026-04-09 03:34:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:39.885237 | orchestrator | 2026-04-09 03:34:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:39.888205 | orchestrator | 2026-04-09 03:34:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:39.888287 | orchestrator | 2026-04-09 03:34:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:42.938272 | orchestrator | 2026-04-09 03:34:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:42.938733 | orchestrator | 2026-04-09 03:34:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:42.939046 | orchestrator | 2026-04-09 03:34:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:45.989222 | orchestrator | 2026-04-09 03:34:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:34:45.990583 | orchestrator | 2026-04-09 03:34:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:45.990621 | orchestrator | 2026-04-09 03:34:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:49.053269 | orchestrator | 2026-04-09 03:34:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:49.055642 | orchestrator | 2026-04-09 03:34:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:49.056117 | orchestrator | 2026-04-09 03:34:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:52.109212 | orchestrator | 2026-04-09 03:34:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:52.111099 | orchestrator | 2026-04-09 03:34:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:52.111155 | orchestrator | 2026-04-09 03:34:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:55.162600 | orchestrator | 2026-04-09 03:34:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:55.164751 | orchestrator | 2026-04-09 03:34:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:55.164806 | orchestrator | 2026-04-09 03:34:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:34:58.214878 | orchestrator | 2026-04-09 03:34:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:34:58.218088 | orchestrator | 2026-04-09 03:34:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:34:58.218155 | orchestrator | 2026-04-09 03:34:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:35:01.267395 | orchestrator | 2026-04-09 03:35:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:35:01.269514 | orchestrator | 2026-04-09 03:35:01 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:35:01.269552 | orchestrator | 2026-04-09 03:35:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:35:04.315806 | orchestrator | 2026-04-09 03:35:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:35:04.317842 | orchestrator | 2026-04-09 03:35:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:35:04.317902 | orchestrator | 2026-04-09 03:35:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:35:07.367223 | orchestrator | 2026-04-09 03:35:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:35:07.369027 | orchestrator | 2026-04-09 03:35:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:35:07.369063 | orchestrator | 2026-04-09 03:35:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:35:10.419385 | orchestrator | 2026-04-09 03:35:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:35:10.420905 | orchestrator | 2026-04-09 03:35:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:35:10.421249 | orchestrator | 2026-04-09 03:35:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:35:13.475878 | orchestrator | 2026-04-09 03:35:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:35:13.477484 | orchestrator | 2026-04-09 03:35:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:35:13.477566 | orchestrator | 2026-04-09 03:35:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:35:16.526338 | orchestrator | 2026-04-09 03:35:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:35:16.527449 | orchestrator | 2026-04-09 03:35:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:35:16.527578 | orchestrator | 2026-04-09 03:35:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:35:19.568604 | orchestrator | 2026-04-09 03:35:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:35:19.569903 | orchestrator | 2026-04-09 03:35:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:35:19.569956 | orchestrator | 2026-04-09 03:35:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:35:22.616427 | orchestrator | 2026-04-09 03:35:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:35:22.617667 | orchestrator | 2026-04-09 03:35:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:35:22.617762 | orchestrator | 2026-04-09 03:35:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:35:25.667297 | orchestrator | 2026-04-09 03:35:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:35:25.670264 | orchestrator | 2026-04-09 03:35:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:35:25.670344 | orchestrator | 2026-04-09 03:35:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:35:28.719227 | orchestrator | 2026-04-09 03:35:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:35:28.720914 | orchestrator | 2026-04-09 03:35:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:35:28.720978 | orchestrator | 2026-04-09 03:35:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:35:31.770970 | orchestrator | 2026-04-09 03:35:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:35:31.772036 | orchestrator | 2026-04-09 03:35:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:35:31.772292 | orchestrator | 2026-04-09 03:35:31 | INFO  | Wait 1 second(s) 
until the next check
2026-04-09 03:35:34.820362 | orchestrator | 2026-04-09 03:35:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 03:35:34.821988 | orchestrator | 2026-04-09 03:35:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 03:35:34.822112 | orchestrator | 2026-04-09 03:35:34 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: the same two log entries and wait message repeat roughly every 3 seconds from 03:35:34 to 03:42:48 (with one ~2-minute gap between 03:35:40 and 03:37:41); both tasks remain in state STARTED throughout ...]
2026-04-09 03:42:48.765221 | orchestrator | 2026-04-09 03:42:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 03:42:48.766518 | orchestrator | 2026-04-09 03:42:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 03:42:48.766589 | orchestrator | 2026-04-09 03:42:48 | INFO  | Wait 1 second(s)
until the next check 2026-04-09 03:42:51.814230 | orchestrator | 2026-04-09 03:42:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:42:51.816550 | orchestrator | 2026-04-09 03:42:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:42:51.816798 | orchestrator | 2026-04-09 03:42:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:42:54.865898 | orchestrator | 2026-04-09 03:42:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:42:54.867851 | orchestrator | 2026-04-09 03:42:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:42:54.867912 | orchestrator | 2026-04-09 03:42:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:42:57.911877 | orchestrator | 2026-04-09 03:42:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:42:57.914159 | orchestrator | 2026-04-09 03:42:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:42:57.914218 | orchestrator | 2026-04-09 03:42:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:00.967281 | orchestrator | 2026-04-09 03:43:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:00.968945 | orchestrator | 2026-04-09 03:43:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:00.969000 | orchestrator | 2026-04-09 03:43:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:04.016618 | orchestrator | 2026-04-09 03:43:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:04.017711 | orchestrator | 2026-04-09 03:43:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:04.017770 | orchestrator | 2026-04-09 03:43:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:07.075713 | orchestrator | 2026-04-09 
03:43:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:07.077834 | orchestrator | 2026-04-09 03:43:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:07.077892 | orchestrator | 2026-04-09 03:43:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:10.119234 | orchestrator | 2026-04-09 03:43:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:10.120264 | orchestrator | 2026-04-09 03:43:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:10.120328 | orchestrator | 2026-04-09 03:43:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:13.169592 | orchestrator | 2026-04-09 03:43:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:13.172710 | orchestrator | 2026-04-09 03:43:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:13.172771 | orchestrator | 2026-04-09 03:43:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:16.214555 | orchestrator | 2026-04-09 03:43:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:16.214995 | orchestrator | 2026-04-09 03:43:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:16.215591 | orchestrator | 2026-04-09 03:43:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:19.259951 | orchestrator | 2026-04-09 03:43:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:19.263167 | orchestrator | 2026-04-09 03:43:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:19.263399 | orchestrator | 2026-04-09 03:43:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:22.313317 | orchestrator | 2026-04-09 03:43:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:43:22.315838 | orchestrator | 2026-04-09 03:43:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:22.315899 | orchestrator | 2026-04-09 03:43:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:25.361018 | orchestrator | 2026-04-09 03:43:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:25.362616 | orchestrator | 2026-04-09 03:43:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:25.362686 | orchestrator | 2026-04-09 03:43:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:28.406915 | orchestrator | 2026-04-09 03:43:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:28.408499 | orchestrator | 2026-04-09 03:43:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:28.408586 | orchestrator | 2026-04-09 03:43:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:31.457878 | orchestrator | 2026-04-09 03:43:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:31.459334 | orchestrator | 2026-04-09 03:43:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:31.459376 | orchestrator | 2026-04-09 03:43:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:34.501076 | orchestrator | 2026-04-09 03:43:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:34.503088 | orchestrator | 2026-04-09 03:43:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:34.503165 | orchestrator | 2026-04-09 03:43:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:37.549457 | orchestrator | 2026-04-09 03:43:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:37.552040 | orchestrator | 2026-04-09 03:43:37 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:37.552202 | orchestrator | 2026-04-09 03:43:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:40.593634 | orchestrator | 2026-04-09 03:43:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:40.595385 | orchestrator | 2026-04-09 03:43:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:40.595442 | orchestrator | 2026-04-09 03:43:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:43.640695 | orchestrator | 2026-04-09 03:43:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:43.642605 | orchestrator | 2026-04-09 03:43:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:43.642649 | orchestrator | 2026-04-09 03:43:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:46.680818 | orchestrator | 2026-04-09 03:43:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:46.683595 | orchestrator | 2026-04-09 03:43:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:46.683665 | orchestrator | 2026-04-09 03:43:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:49.727469 | orchestrator | 2026-04-09 03:43:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:49.729572 | orchestrator | 2026-04-09 03:43:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:49.729638 | orchestrator | 2026-04-09 03:43:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:52.763789 | orchestrator | 2026-04-09 03:43:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:52.765155 | orchestrator | 2026-04-09 03:43:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:43:52.765192 | orchestrator | 2026-04-09 03:43:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:55.811935 | orchestrator | 2026-04-09 03:43:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:55.816327 | orchestrator | 2026-04-09 03:43:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:55.816514 | orchestrator | 2026-04-09 03:43:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:43:58.875700 | orchestrator | 2026-04-09 03:43:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:43:58.877477 | orchestrator | 2026-04-09 03:43:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:43:58.877614 | orchestrator | 2026-04-09 03:43:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:01.933352 | orchestrator | 2026-04-09 03:44:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:01.935635 | orchestrator | 2026-04-09 03:44:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:01.935780 | orchestrator | 2026-04-09 03:44:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:04.973992 | orchestrator | 2026-04-09 03:44:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:04.977409 | orchestrator | 2026-04-09 03:44:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:04.977473 | orchestrator | 2026-04-09 03:44:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:08.027538 | orchestrator | 2026-04-09 03:44:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:08.029381 | orchestrator | 2026-04-09 03:44:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:08.029423 | orchestrator | 2026-04-09 03:44:08 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:44:11.073759 | orchestrator | 2026-04-09 03:44:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:11.080972 | orchestrator | 2026-04-09 03:44:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:11.082924 | orchestrator | 2026-04-09 03:44:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:14.129149 | orchestrator | 2026-04-09 03:44:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:14.131238 | orchestrator | 2026-04-09 03:44:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:14.131297 | orchestrator | 2026-04-09 03:44:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:17.176980 | orchestrator | 2026-04-09 03:44:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:17.179329 | orchestrator | 2026-04-09 03:44:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:17.179416 | orchestrator | 2026-04-09 03:44:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:20.222118 | orchestrator | 2026-04-09 03:44:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:20.224388 | orchestrator | 2026-04-09 03:44:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:20.224575 | orchestrator | 2026-04-09 03:44:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:23.267836 | orchestrator | 2026-04-09 03:44:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:23.270207 | orchestrator | 2026-04-09 03:44:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:23.270266 | orchestrator | 2026-04-09 03:44:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:26.316115 | orchestrator | 2026-04-09 
03:44:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:26.317432 | orchestrator | 2026-04-09 03:44:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:26.317494 | orchestrator | 2026-04-09 03:44:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:29.360960 | orchestrator | 2026-04-09 03:44:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:29.362463 | orchestrator | 2026-04-09 03:44:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:29.362503 | orchestrator | 2026-04-09 03:44:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:32.410369 | orchestrator | 2026-04-09 03:44:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:32.412214 | orchestrator | 2026-04-09 03:44:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:32.412294 | orchestrator | 2026-04-09 03:44:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:35.460155 | orchestrator | 2026-04-09 03:44:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:35.462003 | orchestrator | 2026-04-09 03:44:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:35.462191 | orchestrator | 2026-04-09 03:44:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:38.500031 | orchestrator | 2026-04-09 03:44:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:38.501349 | orchestrator | 2026-04-09 03:44:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:38.501407 | orchestrator | 2026-04-09 03:44:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:41.546230 | orchestrator | 2026-04-09 03:44:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:44:41.548549 | orchestrator | 2026-04-09 03:44:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:41.548600 | orchestrator | 2026-04-09 03:44:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:44.587115 | orchestrator | 2026-04-09 03:44:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:44.587443 | orchestrator | 2026-04-09 03:44:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:44.587470 | orchestrator | 2026-04-09 03:44:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:47.634991 | orchestrator | 2026-04-09 03:44:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:47.637208 | orchestrator | 2026-04-09 03:44:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:47.637271 | orchestrator | 2026-04-09 03:44:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:50.680735 | orchestrator | 2026-04-09 03:44:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:50.682442 | orchestrator | 2026-04-09 03:44:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:50.682513 | orchestrator | 2026-04-09 03:44:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:53.728660 | orchestrator | 2026-04-09 03:44:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:53.729664 | orchestrator | 2026-04-09 03:44:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:53.729890 | orchestrator | 2026-04-09 03:44:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:56.782219 | orchestrator | 2026-04-09 03:44:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:56.783571 | orchestrator | 2026-04-09 03:44:56 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:56.783692 | orchestrator | 2026-04-09 03:44:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:44:59.833403 | orchestrator | 2026-04-09 03:44:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:44:59.836906 | orchestrator | 2026-04-09 03:44:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:44:59.836977 | orchestrator | 2026-04-09 03:44:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:02.881326 | orchestrator | 2026-04-09 03:45:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:02.882712 | orchestrator | 2026-04-09 03:45:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:02.882759 | orchestrator | 2026-04-09 03:45:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:05.929265 | orchestrator | 2026-04-09 03:45:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:05.931310 | orchestrator | 2026-04-09 03:45:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:05.931401 | orchestrator | 2026-04-09 03:45:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:08.975526 | orchestrator | 2026-04-09 03:45:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:08.977408 | orchestrator | 2026-04-09 03:45:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:08.977506 | orchestrator | 2026-04-09 03:45:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:12.026411 | orchestrator | 2026-04-09 03:45:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:12.028383 | orchestrator | 2026-04-09 03:45:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:45:12.028456 | orchestrator | 2026-04-09 03:45:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:15.084831 | orchestrator | 2026-04-09 03:45:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:15.086421 | orchestrator | 2026-04-09 03:45:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:15.086502 | orchestrator | 2026-04-09 03:45:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:18.138139 | orchestrator | 2026-04-09 03:45:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:18.139549 | orchestrator | 2026-04-09 03:45:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:18.139583 | orchestrator | 2026-04-09 03:45:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:21.182966 | orchestrator | 2026-04-09 03:45:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:21.185002 | orchestrator | 2026-04-09 03:45:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:21.185064 | orchestrator | 2026-04-09 03:45:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:24.232174 | orchestrator | 2026-04-09 03:45:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:24.235662 | orchestrator | 2026-04-09 03:45:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:24.236142 | orchestrator | 2026-04-09 03:45:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:27.281145 | orchestrator | 2026-04-09 03:45:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:27.282988 | orchestrator | 2026-04-09 03:45:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:27.283345 | orchestrator | 2026-04-09 03:45:27 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:45:30.328455 | orchestrator | 2026-04-09 03:45:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:30.330638 | orchestrator | 2026-04-09 03:45:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:30.330793 | orchestrator | 2026-04-09 03:45:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:33.378445 | orchestrator | 2026-04-09 03:45:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:33.380513 | orchestrator | 2026-04-09 03:45:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:33.380582 | orchestrator | 2026-04-09 03:45:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:36.427562 | orchestrator | 2026-04-09 03:45:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:36.429595 | orchestrator | 2026-04-09 03:45:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:36.429713 | orchestrator | 2026-04-09 03:45:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:39.477221 | orchestrator | 2026-04-09 03:45:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:39.478466 | orchestrator | 2026-04-09 03:45:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:39.478536 | orchestrator | 2026-04-09 03:45:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:42.516354 | orchestrator | 2026-04-09 03:45:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:42.517963 | orchestrator | 2026-04-09 03:45:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:42.518008 | orchestrator | 2026-04-09 03:45:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:45.563954 | orchestrator | 2026-04-09 
03:45:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:45.566528 | orchestrator | 2026-04-09 03:45:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:45.566623 | orchestrator | 2026-04-09 03:45:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:48.619260 | orchestrator | 2026-04-09 03:45:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:48.621777 | orchestrator | 2026-04-09 03:45:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:48.621865 | orchestrator | 2026-04-09 03:45:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:51.673679 | orchestrator | 2026-04-09 03:45:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:51.675689 | orchestrator | 2026-04-09 03:45:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:51.675914 | orchestrator | 2026-04-09 03:45:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:54.728807 | orchestrator | 2026-04-09 03:45:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:54.732176 | orchestrator | 2026-04-09 03:45:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:54.732247 | orchestrator | 2026-04-09 03:45:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:45:57.776673 | orchestrator | 2026-04-09 03:45:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:45:57.778751 | orchestrator | 2026-04-09 03:45:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:45:57.778816 | orchestrator | 2026-04-09 03:45:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:46:00.824953 | orchestrator | 2026-04-09 03:46:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:46:00.826863 | orchestrator | 2026-04-09 03:46:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:46:00.827000 | orchestrator | 2026-04-09 03:46:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:46:03.875891 | orchestrator | 2026-04-09 03:46:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:46:03.877980 | orchestrator | 2026-04-09 03:46:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:46:03.878082 | orchestrator | 2026-04-09 03:46:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:46:06.926874 | orchestrator | 2026-04-09 03:46:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:46:06.927584 | orchestrator | 2026-04-09 03:46:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:46:06.927624 | orchestrator | 2026-04-09 03:46:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:46:09.978459 | orchestrator | 2026-04-09 03:46:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:46:09.980862 | orchestrator | 2026-04-09 03:46:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:46:09.980916 | orchestrator | 2026-04-09 03:46:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:46:13.031836 | orchestrator | 2026-04-09 03:46:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:46:13.033857 | orchestrator | 2026-04-09 03:46:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:46:13.033897 | orchestrator | 2026-04-09 03:46:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:46:16.085658 | orchestrator | 2026-04-09 03:46:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:46:16.086670 | orchestrator | 2026-04-09 03:46:16 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:46:16.086784 | orchestrator | 2026-04-09 03:46:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:46:19.135717 | orchestrator | 2026-04-09 03:46:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:46:19.137574 | orchestrator | 2026-04-09 03:46:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:46:19.137611 | orchestrator | 2026-04-09 03:46:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:46:22.183294 | orchestrator | 2026-04-09 03:46:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:46:22.183979 | orchestrator | 2026-04-09 03:46:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:46:22.184016 | orchestrator | 2026-04-09 03:46:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:46:25.230679 | orchestrator | 2026-04-09 03:46:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:46:25.232126 | orchestrator | 2026-04-09 03:46:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:46:25.232147 | orchestrator | 2026-04-09 03:46:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:46:28.273582 | orchestrator | 2026-04-09 03:46:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:46:28.275649 | orchestrator | 2026-04-09 03:46:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:46:28.275724 | orchestrator | 2026-04-09 03:46:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:46:31.319194 | orchestrator | 2026-04-09 03:46:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:46:31.320832 | orchestrator | 2026-04-09 03:46:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:46:31.320912 | orchestrator | 2026-04-09 03:46:31 | INFO  | Wait 1 second(s) until the next check
2026-04-09 03:46:34.373145 | orchestrator | 2026-04-09 03:46:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 03:46:34.374619 | orchestrator | 2026-04-09 03:46:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 03:46:34.374639 | orchestrator | 2026-04-09 03:46:34 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycle repeated every ~3 s from 03:46:37 through 03:52:00; both tasks remained in state STARTED ...]
2026-04-09 03:52:03.788665 | orchestrator | 2026-04-09 03:52:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 03:52:03.789255 | orchestrator | 2026-04-09 03:52:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 03:52:03.789318 | orchestrator | 2026-04-09 03:52:03 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:52:06.843049 | orchestrator | 2026-04-09 03:52:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:06.844143 | orchestrator | 2026-04-09 03:52:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:06.844420 | orchestrator | 2026-04-09 03:52:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:09.896979 | orchestrator | 2026-04-09 03:52:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:09.898981 | orchestrator | 2026-04-09 03:52:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:09.899045 | orchestrator | 2026-04-09 03:52:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:12.940993 | orchestrator | 2026-04-09 03:52:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:12.944190 | orchestrator | 2026-04-09 03:52:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:12.944248 | orchestrator | 2026-04-09 03:52:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:15.980897 | orchestrator | 2026-04-09 03:52:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:15.982785 | orchestrator | 2026-04-09 03:52:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:15.982843 | orchestrator | 2026-04-09 03:52:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:19.041278 | orchestrator | 2026-04-09 03:52:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:19.043261 | orchestrator | 2026-04-09 03:52:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:19.043369 | orchestrator | 2026-04-09 03:52:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:22.085703 | orchestrator | 2026-04-09 
03:52:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:22.088284 | orchestrator | 2026-04-09 03:52:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:22.088401 | orchestrator | 2026-04-09 03:52:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:25.139166 | orchestrator | 2026-04-09 03:52:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:25.141034 | orchestrator | 2026-04-09 03:52:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:25.141081 | orchestrator | 2026-04-09 03:52:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:28.190641 | orchestrator | 2026-04-09 03:52:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:28.192045 | orchestrator | 2026-04-09 03:52:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:28.192102 | orchestrator | 2026-04-09 03:52:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:31.239099 | orchestrator | 2026-04-09 03:52:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:31.240375 | orchestrator | 2026-04-09 03:52:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:31.240393 | orchestrator | 2026-04-09 03:52:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:34.290668 | orchestrator | 2026-04-09 03:52:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:34.292713 | orchestrator | 2026-04-09 03:52:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:34.292759 | orchestrator | 2026-04-09 03:52:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:37.337437 | orchestrator | 2026-04-09 03:52:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:52:37.338883 | orchestrator | 2026-04-09 03:52:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:37.338952 | orchestrator | 2026-04-09 03:52:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:40.390309 | orchestrator | 2026-04-09 03:52:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:40.391694 | orchestrator | 2026-04-09 03:52:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:40.391729 | orchestrator | 2026-04-09 03:52:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:43.446292 | orchestrator | 2026-04-09 03:52:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:43.447980 | orchestrator | 2026-04-09 03:52:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:43.448027 | orchestrator | 2026-04-09 03:52:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:46.493979 | orchestrator | 2026-04-09 03:52:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:46.496127 | orchestrator | 2026-04-09 03:52:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:46.496223 | orchestrator | 2026-04-09 03:52:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:49.545737 | orchestrator | 2026-04-09 03:52:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:49.547863 | orchestrator | 2026-04-09 03:52:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:49.547904 | orchestrator | 2026-04-09 03:52:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:52.588588 | orchestrator | 2026-04-09 03:52:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:52.590132 | orchestrator | 2026-04-09 03:52:52 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:52.590197 | orchestrator | 2026-04-09 03:52:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:55.630983 | orchestrator | 2026-04-09 03:52:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:55.631459 | orchestrator | 2026-04-09 03:52:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:55.631497 | orchestrator | 2026-04-09 03:52:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:52:58.687994 | orchestrator | 2026-04-09 03:52:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:52:58.689804 | orchestrator | 2026-04-09 03:52:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:52:58.689854 | orchestrator | 2026-04-09 03:52:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:01.736902 | orchestrator | 2026-04-09 03:53:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:01.738562 | orchestrator | 2026-04-09 03:53:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:01.738652 | orchestrator | 2026-04-09 03:53:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:04.785754 | orchestrator | 2026-04-09 03:53:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:04.787794 | orchestrator | 2026-04-09 03:53:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:04.787928 | orchestrator | 2026-04-09 03:53:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:07.841325 | orchestrator | 2026-04-09 03:53:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:07.842935 | orchestrator | 2026-04-09 03:53:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:53:07.842992 | orchestrator | 2026-04-09 03:53:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:10.883258 | orchestrator | 2026-04-09 03:53:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:10.885292 | orchestrator | 2026-04-09 03:53:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:10.885338 | orchestrator | 2026-04-09 03:53:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:13.932988 | orchestrator | 2026-04-09 03:53:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:13.933792 | orchestrator | 2026-04-09 03:53:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:13.933820 | orchestrator | 2026-04-09 03:53:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:16.977681 | orchestrator | 2026-04-09 03:53:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:16.979162 | orchestrator | 2026-04-09 03:53:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:16.979306 | orchestrator | 2026-04-09 03:53:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:20.024289 | orchestrator | 2026-04-09 03:53:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:20.026946 | orchestrator | 2026-04-09 03:53:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:20.027014 | orchestrator | 2026-04-09 03:53:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:23.070708 | orchestrator | 2026-04-09 03:53:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:23.072531 | orchestrator | 2026-04-09 03:53:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:23.072583 | orchestrator | 2026-04-09 03:53:23 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:53:26.115934 | orchestrator | 2026-04-09 03:53:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:26.117783 | orchestrator | 2026-04-09 03:53:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:26.117842 | orchestrator | 2026-04-09 03:53:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:29.161050 | orchestrator | 2026-04-09 03:53:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:29.162927 | orchestrator | 2026-04-09 03:53:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:29.162973 | orchestrator | 2026-04-09 03:53:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:32.197774 | orchestrator | 2026-04-09 03:53:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:32.198600 | orchestrator | 2026-04-09 03:53:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:32.198626 | orchestrator | 2026-04-09 03:53:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:35.252000 | orchestrator | 2026-04-09 03:53:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:35.254902 | orchestrator | 2026-04-09 03:53:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:35.254990 | orchestrator | 2026-04-09 03:53:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:38.304850 | orchestrator | 2026-04-09 03:53:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:38.306235 | orchestrator | 2026-04-09 03:53:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:38.306293 | orchestrator | 2026-04-09 03:53:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:41.350084 | orchestrator | 2026-04-09 
03:53:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:41.352673 | orchestrator | 2026-04-09 03:53:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:41.352724 | orchestrator | 2026-04-09 03:53:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:44.397414 | orchestrator | 2026-04-09 03:53:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:44.400015 | orchestrator | 2026-04-09 03:53:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:44.400091 | orchestrator | 2026-04-09 03:53:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:47.439920 | orchestrator | 2026-04-09 03:53:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:47.440223 | orchestrator | 2026-04-09 03:53:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:47.440250 | orchestrator | 2026-04-09 03:53:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:50.481502 | orchestrator | 2026-04-09 03:53:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:50.482873 | orchestrator | 2026-04-09 03:53:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:50.483033 | orchestrator | 2026-04-09 03:53:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:53.534321 | orchestrator | 2026-04-09 03:53:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:53.535925 | orchestrator | 2026-04-09 03:53:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:53.535956 | orchestrator | 2026-04-09 03:53:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:56.578421 | orchestrator | 2026-04-09 03:53:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:53:56.582140 | orchestrator | 2026-04-09 03:53:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:56.582261 | orchestrator | 2026-04-09 03:53:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:53:59.629730 | orchestrator | 2026-04-09 03:53:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:53:59.630749 | orchestrator | 2026-04-09 03:53:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:53:59.630792 | orchestrator | 2026-04-09 03:53:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:02.662894 | orchestrator | 2026-04-09 03:54:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:02.664133 | orchestrator | 2026-04-09 03:54:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:02.664204 | orchestrator | 2026-04-09 03:54:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:05.712314 | orchestrator | 2026-04-09 03:54:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:05.713278 | orchestrator | 2026-04-09 03:54:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:05.713316 | orchestrator | 2026-04-09 03:54:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:08.759634 | orchestrator | 2026-04-09 03:54:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:08.762636 | orchestrator | 2026-04-09 03:54:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:08.762733 | orchestrator | 2026-04-09 03:54:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:11.809862 | orchestrator | 2026-04-09 03:54:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:11.811705 | orchestrator | 2026-04-09 03:54:11 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:11.811801 | orchestrator | 2026-04-09 03:54:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:14.864276 | orchestrator | 2026-04-09 03:54:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:14.866739 | orchestrator | 2026-04-09 03:54:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:14.866781 | orchestrator | 2026-04-09 03:54:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:17.905595 | orchestrator | 2026-04-09 03:54:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:17.908715 | orchestrator | 2026-04-09 03:54:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:17.908807 | orchestrator | 2026-04-09 03:54:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:20.942673 | orchestrator | 2026-04-09 03:54:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:20.944012 | orchestrator | 2026-04-09 03:54:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:20.944067 | orchestrator | 2026-04-09 03:54:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:23.988466 | orchestrator | 2026-04-09 03:54:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:23.990364 | orchestrator | 2026-04-09 03:54:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:23.990416 | orchestrator | 2026-04-09 03:54:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:27.035435 | orchestrator | 2026-04-09 03:54:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:27.037295 | orchestrator | 2026-04-09 03:54:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:54:27.037347 | orchestrator | 2026-04-09 03:54:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:30.087346 | orchestrator | 2026-04-09 03:54:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:30.089459 | orchestrator | 2026-04-09 03:54:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:30.090009 | orchestrator | 2026-04-09 03:54:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:33.126866 | orchestrator | 2026-04-09 03:54:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:33.128270 | orchestrator | 2026-04-09 03:54:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:33.128335 | orchestrator | 2026-04-09 03:54:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:36.172931 | orchestrator | 2026-04-09 03:54:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:36.173859 | orchestrator | 2026-04-09 03:54:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:36.173892 | orchestrator | 2026-04-09 03:54:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:39.213963 | orchestrator | 2026-04-09 03:54:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:39.214779 | orchestrator | 2026-04-09 03:54:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:39.214831 | orchestrator | 2026-04-09 03:54:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:42.250477 | orchestrator | 2026-04-09 03:54:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:42.252276 | orchestrator | 2026-04-09 03:54:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:42.252323 | orchestrator | 2026-04-09 03:54:42 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 03:54:45.299370 | orchestrator | 2026-04-09 03:54:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:45.300897 | orchestrator | 2026-04-09 03:54:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:45.300972 | orchestrator | 2026-04-09 03:54:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:48.349045 | orchestrator | 2026-04-09 03:54:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:48.350746 | orchestrator | 2026-04-09 03:54:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:48.350827 | orchestrator | 2026-04-09 03:54:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:51.394414 | orchestrator | 2026-04-09 03:54:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:51.396599 | orchestrator | 2026-04-09 03:54:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:51.396647 | orchestrator | 2026-04-09 03:54:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:54.447118 | orchestrator | 2026-04-09 03:54:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:54.448928 | orchestrator | 2026-04-09 03:54:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:54.449200 | orchestrator | 2026-04-09 03:54:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:54:57.491027 | orchestrator | 2026-04-09 03:54:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:54:57.495433 | orchestrator | 2026-04-09 03:54:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:54:57.495485 | orchestrator | 2026-04-09 03:54:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:00.539635 | orchestrator | 2026-04-09 
03:55:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:00.540535 | orchestrator | 2026-04-09 03:55:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:00.540591 | orchestrator | 2026-04-09 03:55:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:03.585035 | orchestrator | 2026-04-09 03:55:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:03.586998 | orchestrator | 2026-04-09 03:55:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:03.587054 | orchestrator | 2026-04-09 03:55:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:06.639971 | orchestrator | 2026-04-09 03:55:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:06.642096 | orchestrator | 2026-04-09 03:55:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:06.642161 | orchestrator | 2026-04-09 03:55:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:09.690241 | orchestrator | 2026-04-09 03:55:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:09.691479 | orchestrator | 2026-04-09 03:55:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:09.691557 | orchestrator | 2026-04-09 03:55:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:12.740445 | orchestrator | 2026-04-09 03:55:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:12.742385 | orchestrator | 2026-04-09 03:55:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:12.742542 | orchestrator | 2026-04-09 03:55:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:15.792451 | orchestrator | 2026-04-09 03:55:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 03:55:15.793542 | orchestrator | 2026-04-09 03:55:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:15.793639 | orchestrator | 2026-04-09 03:55:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:18.841564 | orchestrator | 2026-04-09 03:55:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:18.844386 | orchestrator | 2026-04-09 03:55:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:18.844467 | orchestrator | 2026-04-09 03:55:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:21.897927 | orchestrator | 2026-04-09 03:55:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:21.900935 | orchestrator | 2026-04-09 03:55:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:21.900973 | orchestrator | 2026-04-09 03:55:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:24.953873 | orchestrator | 2026-04-09 03:55:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:24.955504 | orchestrator | 2026-04-09 03:55:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:24.955558 | orchestrator | 2026-04-09 03:55:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:28.012278 | orchestrator | 2026-04-09 03:55:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:28.026883 | orchestrator | 2026-04-09 03:55:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:28.026977 | orchestrator | 2026-04-09 03:55:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:31.079456 | orchestrator | 2026-04-09 03:55:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:31.080425 | orchestrator | 2026-04-09 03:55:31 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:31.080505 | orchestrator | 2026-04-09 03:55:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:34.120897 | orchestrator | 2026-04-09 03:55:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:34.122269 | orchestrator | 2026-04-09 03:55:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:34.122325 | orchestrator | 2026-04-09 03:55:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:37.165698 | orchestrator | 2026-04-09 03:55:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:37.166517 | orchestrator | 2026-04-09 03:55:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:37.166547 | orchestrator | 2026-04-09 03:55:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:40.208059 | orchestrator | 2026-04-09 03:55:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:40.211036 | orchestrator | 2026-04-09 03:55:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:40.211126 | orchestrator | 2026-04-09 03:55:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:43.261419 | orchestrator | 2026-04-09 03:55:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:43.264474 | orchestrator | 2026-04-09 03:55:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 03:55:43.264543 | orchestrator | 2026-04-09 03:55:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 03:55:46.312796 | orchestrator | 2026-04-09 03:55:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 03:55:46.315350 | orchestrator | 2026-04-09 03:55:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
03:55:46.315406 | orchestrator | 2026-04-09 03:55:46 | INFO  | Wait 1 second(s) until the next check
2026-04-09 03:55:49.368399 | orchestrator | 2026-04-09 03:55:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 03:55:49.370975 | orchestrator | 2026-04-09 03:55:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 03:55:49.371066 | orchestrator | 2026-04-09 03:55:49 | INFO  | Wait 1 second(s) until the next check
2026-04-09 04:00:48.146765 | orchestrator | 2026-04-09 04:00:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 04:00:48.149542 | orchestrator | 2026-04-09 04:00:48 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:00:48.149576 | orchestrator | 2026-04-09 04:00:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:00:51.200425 | orchestrator | 2026-04-09 04:00:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:00:51.203192 | orchestrator | 2026-04-09 04:00:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:00:51.203356 | orchestrator | 2026-04-09 04:00:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:00:54.254459 | orchestrator | 2026-04-09 04:00:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:00:54.256444 | orchestrator | 2026-04-09 04:00:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:00:54.256497 | orchestrator | 2026-04-09 04:00:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:00:57.310331 | orchestrator | 2026-04-09 04:00:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:00:57.311940 | orchestrator | 2026-04-09 04:00:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:00:57.311989 | orchestrator | 2026-04-09 04:00:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:00.354773 | orchestrator | 2026-04-09 04:01:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:00.355955 | orchestrator | 2026-04-09 04:01:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:00.355990 | orchestrator | 2026-04-09 04:01:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:03.405755 | orchestrator | 2026-04-09 04:01:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:03.407053 | orchestrator | 2026-04-09 04:01:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
04:01:03.407124 | orchestrator | 2026-04-09 04:01:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:06.454634 | orchestrator | 2026-04-09 04:01:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:06.457320 | orchestrator | 2026-04-09 04:01:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:06.457382 | orchestrator | 2026-04-09 04:01:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:09.502230 | orchestrator | 2026-04-09 04:01:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:09.504375 | orchestrator | 2026-04-09 04:01:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:09.504453 | orchestrator | 2026-04-09 04:01:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:12.551170 | orchestrator | 2026-04-09 04:01:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:12.553239 | orchestrator | 2026-04-09 04:01:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:12.553299 | orchestrator | 2026-04-09 04:01:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:15.601443 | orchestrator | 2026-04-09 04:01:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:15.603822 | orchestrator | 2026-04-09 04:01:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:15.603899 | orchestrator | 2026-04-09 04:01:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:18.654378 | orchestrator | 2026-04-09 04:01:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:18.655954 | orchestrator | 2026-04-09 04:01:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:18.656023 | orchestrator | 2026-04-09 04:01:18 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 04:01:21.706344 | orchestrator | 2026-04-09 04:01:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:21.706951 | orchestrator | 2026-04-09 04:01:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:21.706989 | orchestrator | 2026-04-09 04:01:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:24.754388 | orchestrator | 2026-04-09 04:01:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:24.757114 | orchestrator | 2026-04-09 04:01:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:24.757170 | orchestrator | 2026-04-09 04:01:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:27.810302 | orchestrator | 2026-04-09 04:01:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:27.813141 | orchestrator | 2026-04-09 04:01:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:27.813202 | orchestrator | 2026-04-09 04:01:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:30.867212 | orchestrator | 2026-04-09 04:01:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:30.870999 | orchestrator | 2026-04-09 04:01:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:30.871081 | orchestrator | 2026-04-09 04:01:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:33.915520 | orchestrator | 2026-04-09 04:01:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:33.919787 | orchestrator | 2026-04-09 04:01:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:33.919944 | orchestrator | 2026-04-09 04:01:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:36.960405 | orchestrator | 2026-04-09 
04:01:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:36.961996 | orchestrator | 2026-04-09 04:01:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:36.962180 | orchestrator | 2026-04-09 04:01:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:40.009600 | orchestrator | 2026-04-09 04:01:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:40.011401 | orchestrator | 2026-04-09 04:01:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:40.011442 | orchestrator | 2026-04-09 04:01:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:43.053359 | orchestrator | 2026-04-09 04:01:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:43.054641 | orchestrator | 2026-04-09 04:01:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:43.054843 | orchestrator | 2026-04-09 04:01:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:46.096756 | orchestrator | 2026-04-09 04:01:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:46.098270 | orchestrator | 2026-04-09 04:01:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:46.098360 | orchestrator | 2026-04-09 04:01:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:49.135108 | orchestrator | 2026-04-09 04:01:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:49.136064 | orchestrator | 2026-04-09 04:01:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:49.136168 | orchestrator | 2026-04-09 04:01:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:52.175191 | orchestrator | 2026-04-09 04:01:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 04:01:52.176491 | orchestrator | 2026-04-09 04:01:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:52.176528 | orchestrator | 2026-04-09 04:01:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:55.222841 | orchestrator | 2026-04-09 04:01:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:55.225578 | orchestrator | 2026-04-09 04:01:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:55.225808 | orchestrator | 2026-04-09 04:01:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:01:58.271549 | orchestrator | 2026-04-09 04:01:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:01:58.273644 | orchestrator | 2026-04-09 04:01:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:01:58.273702 | orchestrator | 2026-04-09 04:01:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:01.316445 | orchestrator | 2026-04-09 04:02:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:01.320945 | orchestrator | 2026-04-09 04:02:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:01.321029 | orchestrator | 2026-04-09 04:02:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:04.365587 | orchestrator | 2026-04-09 04:02:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:04.366963 | orchestrator | 2026-04-09 04:02:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:04.367028 | orchestrator | 2026-04-09 04:02:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:07.411979 | orchestrator | 2026-04-09 04:02:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:07.413263 | orchestrator | 2026-04-09 04:02:07 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:07.413321 | orchestrator | 2026-04-09 04:02:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:10.459491 | orchestrator | 2026-04-09 04:02:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:10.461329 | orchestrator | 2026-04-09 04:02:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:10.461458 | orchestrator | 2026-04-09 04:02:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:13.516298 | orchestrator | 2026-04-09 04:02:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:13.521147 | orchestrator | 2026-04-09 04:02:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:13.521205 | orchestrator | 2026-04-09 04:02:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:16.567935 | orchestrator | 2026-04-09 04:02:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:16.570743 | orchestrator | 2026-04-09 04:02:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:16.570852 | orchestrator | 2026-04-09 04:02:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:19.623419 | orchestrator | 2026-04-09 04:02:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:19.624795 | orchestrator | 2026-04-09 04:02:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:19.624863 | orchestrator | 2026-04-09 04:02:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:22.664985 | orchestrator | 2026-04-09 04:02:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:22.666592 | orchestrator | 2026-04-09 04:02:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
04:02:22.666647 | orchestrator | 2026-04-09 04:02:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:25.708370 | orchestrator | 2026-04-09 04:02:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:25.709787 | orchestrator | 2026-04-09 04:02:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:25.709858 | orchestrator | 2026-04-09 04:02:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:28.755525 | orchestrator | 2026-04-09 04:02:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:28.756592 | orchestrator | 2026-04-09 04:02:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:28.756641 | orchestrator | 2026-04-09 04:02:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:31.811799 | orchestrator | 2026-04-09 04:02:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:31.814396 | orchestrator | 2026-04-09 04:02:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:31.814478 | orchestrator | 2026-04-09 04:02:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:34.860550 | orchestrator | 2026-04-09 04:02:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:34.862454 | orchestrator | 2026-04-09 04:02:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:34.862514 | orchestrator | 2026-04-09 04:02:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:37.905856 | orchestrator | 2026-04-09 04:02:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:37.907413 | orchestrator | 2026-04-09 04:02:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:37.907520 | orchestrator | 2026-04-09 04:02:37 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 04:02:40.959053 | orchestrator | 2026-04-09 04:02:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:40.962632 | orchestrator | 2026-04-09 04:02:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:40.962698 | orchestrator | 2026-04-09 04:02:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:44.020461 | orchestrator | 2026-04-09 04:02:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:44.021674 | orchestrator | 2026-04-09 04:02:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:44.021732 | orchestrator | 2026-04-09 04:02:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:47.065624 | orchestrator | 2026-04-09 04:02:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:47.066661 | orchestrator | 2026-04-09 04:02:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:47.066711 | orchestrator | 2026-04-09 04:02:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:50.110755 | orchestrator | 2026-04-09 04:02:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:50.114498 | orchestrator | 2026-04-09 04:02:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:50.114562 | orchestrator | 2026-04-09 04:02:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:53.158776 | orchestrator | 2026-04-09 04:02:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:53.159436 | orchestrator | 2026-04-09 04:02:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:53.159613 | orchestrator | 2026-04-09 04:02:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:56.198444 | orchestrator | 2026-04-09 
04:02:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:56.199462 | orchestrator | 2026-04-09 04:02:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:56.199559 | orchestrator | 2026-04-09 04:02:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:02:59.247465 | orchestrator | 2026-04-09 04:02:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:02:59.248319 | orchestrator | 2026-04-09 04:02:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:02:59.248367 | orchestrator | 2026-04-09 04:02:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:02.292691 | orchestrator | 2026-04-09 04:03:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:02.293352 | orchestrator | 2026-04-09 04:03:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:02.293622 | orchestrator | 2026-04-09 04:03:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:05.339020 | orchestrator | 2026-04-09 04:03:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:05.339509 | orchestrator | 2026-04-09 04:03:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:05.339542 | orchestrator | 2026-04-09 04:03:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:08.388459 | orchestrator | 2026-04-09 04:03:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:08.389358 | orchestrator | 2026-04-09 04:03:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:08.389403 | orchestrator | 2026-04-09 04:03:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:11.442368 | orchestrator | 2026-04-09 04:03:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 04:03:11.443825 | orchestrator | 2026-04-09 04:03:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:11.443892 | orchestrator | 2026-04-09 04:03:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:14.491007 | orchestrator | 2026-04-09 04:03:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:14.491887 | orchestrator | 2026-04-09 04:03:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:14.491968 | orchestrator | 2026-04-09 04:03:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:17.531734 | orchestrator | 2026-04-09 04:03:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:17.533663 | orchestrator | 2026-04-09 04:03:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:17.533750 | orchestrator | 2026-04-09 04:03:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:20.577050 | orchestrator | 2026-04-09 04:03:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:20.577906 | orchestrator | 2026-04-09 04:03:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:20.577949 | orchestrator | 2026-04-09 04:03:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:23.640962 | orchestrator | 2026-04-09 04:03:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:23.643230 | orchestrator | 2026-04-09 04:03:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:23.643279 | orchestrator | 2026-04-09 04:03:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:26.696215 | orchestrator | 2026-04-09 04:03:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:26.699014 | orchestrator | 2026-04-09 04:03:26 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:26.699072 | orchestrator | 2026-04-09 04:03:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:29.746241 | orchestrator | 2026-04-09 04:03:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:29.748010 | orchestrator | 2026-04-09 04:03:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:29.748106 | orchestrator | 2026-04-09 04:03:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:32.803797 | orchestrator | 2026-04-09 04:03:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:32.805528 | orchestrator | 2026-04-09 04:03:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:32.805580 | orchestrator | 2026-04-09 04:03:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:35.853942 | orchestrator | 2026-04-09 04:03:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:35.855429 | orchestrator | 2026-04-09 04:03:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:35.855483 | orchestrator | 2026-04-09 04:03:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:38.901091 | orchestrator | 2026-04-09 04:03:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:38.902659 | orchestrator | 2026-04-09 04:03:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:38.902724 | orchestrator | 2026-04-09 04:03:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:41.941703 | orchestrator | 2026-04-09 04:03:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:41.943712 | orchestrator | 2026-04-09 04:03:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
04:03:41.943764 | orchestrator | 2026-04-09 04:03:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:44.983653 | orchestrator | 2026-04-09 04:03:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:44.983772 | orchestrator | 2026-04-09 04:03:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:44.983783 | orchestrator | 2026-04-09 04:03:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:48.031294 | orchestrator | 2026-04-09 04:03:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:48.032012 | orchestrator | 2026-04-09 04:03:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:48.032080 | orchestrator | 2026-04-09 04:03:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:51.081478 | orchestrator | 2026-04-09 04:03:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:51.083373 | orchestrator | 2026-04-09 04:03:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:51.083415 | orchestrator | 2026-04-09 04:03:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:54.123693 | orchestrator | 2026-04-09 04:03:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:54.125063 | orchestrator | 2026-04-09 04:03:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:54.125098 | orchestrator | 2026-04-09 04:03:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:03:57.166128 | orchestrator | 2026-04-09 04:03:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:03:57.167419 | orchestrator | 2026-04-09 04:03:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:03:57.167472 | orchestrator | 2026-04-09 04:03:57 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 04:04:00.212254 | orchestrator | 2026-04-09 04:04:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:04:00.212779 | orchestrator | 2026-04-09 04:04:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:04:00.212803 | orchestrator | 2026-04-09 04:04:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:04:03.247724 | orchestrator | 2026-04-09 04:04:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:04:03.250315 | orchestrator | 2026-04-09 04:04:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:04:03.250409 | orchestrator | 2026-04-09 04:04:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:04:06.288432 | orchestrator | 2026-04-09 04:04:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:04:06.289556 | orchestrator | 2026-04-09 04:04:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:04:06.289810 | orchestrator | 2026-04-09 04:04:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:04:09.334846 | orchestrator | 2026-04-09 04:04:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:04:09.335526 | orchestrator | 2026-04-09 04:04:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:04:09.335748 | orchestrator | 2026-04-09 04:04:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:04:12.378809 | orchestrator | 2026-04-09 04:04:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:04:12.380478 | orchestrator | 2026-04-09 04:04:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:04:12.380543 | orchestrator | 2026-04-09 04:04:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:04:15.424056 | orchestrator | 2026-04-09 
04:04:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:04:15.425470 | orchestrator | 2026-04-09 04:04:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:04:15.425495 | orchestrator | 2026-04-09 04:04:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:04:18.464256 | orchestrator | 2026-04-09 04:04:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:04:18.466471 | orchestrator | 2026-04-09 04:04:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:04:18.466589 | orchestrator | 2026-04-09 04:04:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:04:21.514485 | orchestrator | 2026-04-09 04:04:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:04:21.516194 | orchestrator | 2026-04-09 04:04:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:04:21.516238 | orchestrator | 2026-04-09 04:04:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:04:24.565842 | orchestrator | 2026-04-09 04:04:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:04:24.567696 | orchestrator | 2026-04-09 04:04:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:04:24.567769 | orchestrator | 2026-04-09 04:04:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:04:27.613424 | orchestrator | 2026-04-09 04:04:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:04:27.614604 | orchestrator | 2026-04-09 04:04:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:04:27.614919 | orchestrator | 2026-04-09 04:04:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:04:30.664980 | orchestrator | 2026-04-09 04:04:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 04:04:30.667505 | orchestrator | 2026-04-09 04:04:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:04:30.667561 | orchestrator | 2026-04-09 04:04:30 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output elided: Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 and Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede remained in state STARTED, checked roughly every 3 seconds from 04:04:33 through 04:12:00, with a gap in the console stream between 04:06:41 and 04:08:44 ...]
2026-04-09 04:12:03.125300 | orchestrator | 2026-04-09 04:12:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:03.125405 | orchestrator | 2026-04-09 04:12:03 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:03.125443 | orchestrator | 2026-04-09 04:12:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:06.162990 | orchestrator | 2026-04-09 04:12:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:06.165348 | orchestrator | 2026-04-09 04:12:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:06.165482 | orchestrator | 2026-04-09 04:12:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:09.209443 | orchestrator | 2026-04-09 04:12:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:09.210462 | orchestrator | 2026-04-09 04:12:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:09.210539 | orchestrator | 2026-04-09 04:12:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:12.268948 | orchestrator | 2026-04-09 04:12:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:12.271912 | orchestrator | 2026-04-09 04:12:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:12.272980 | orchestrator | 2026-04-09 04:12:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:15.318193 | orchestrator | 2026-04-09 04:12:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:15.321018 | orchestrator | 2026-04-09 04:12:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:15.321076 | orchestrator | 2026-04-09 04:12:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:18.370264 | orchestrator | 2026-04-09 04:12:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:18.371750 | orchestrator | 2026-04-09 04:12:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
04:12:18.371784 | orchestrator | 2026-04-09 04:12:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:21.414648 | orchestrator | 2026-04-09 04:12:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:21.416230 | orchestrator | 2026-04-09 04:12:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:21.416270 | orchestrator | 2026-04-09 04:12:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:24.459946 | orchestrator | 2026-04-09 04:12:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:24.461700 | orchestrator | 2026-04-09 04:12:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:24.461758 | orchestrator | 2026-04-09 04:12:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:27.501696 | orchestrator | 2026-04-09 04:12:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:27.504524 | orchestrator | 2026-04-09 04:12:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:27.504570 | orchestrator | 2026-04-09 04:12:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:30.552961 | orchestrator | 2026-04-09 04:12:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:30.553924 | orchestrator | 2026-04-09 04:12:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:30.554089 | orchestrator | 2026-04-09 04:12:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:33.601223 | orchestrator | 2026-04-09 04:12:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:33.601953 | orchestrator | 2026-04-09 04:12:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:33.602073 | orchestrator | 2026-04-09 04:12:33 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 04:12:36.647280 | orchestrator | 2026-04-09 04:12:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:36.649212 | orchestrator | 2026-04-09 04:12:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:36.649427 | orchestrator | 2026-04-09 04:12:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:39.693747 | orchestrator | 2026-04-09 04:12:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:39.695777 | orchestrator | 2026-04-09 04:12:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:39.695850 | orchestrator | 2026-04-09 04:12:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:42.751698 | orchestrator | 2026-04-09 04:12:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:42.753016 | orchestrator | 2026-04-09 04:12:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:42.753289 | orchestrator | 2026-04-09 04:12:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:45.802484 | orchestrator | 2026-04-09 04:12:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:45.804434 | orchestrator | 2026-04-09 04:12:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:45.804506 | orchestrator | 2026-04-09 04:12:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:48.847131 | orchestrator | 2026-04-09 04:12:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:48.848188 | orchestrator | 2026-04-09 04:12:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:48.848228 | orchestrator | 2026-04-09 04:12:48 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:51.890594 | orchestrator | 2026-04-09 
04:12:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:51.892921 | orchestrator | 2026-04-09 04:12:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:51.892976 | orchestrator | 2026-04-09 04:12:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:54.940904 | orchestrator | 2026-04-09 04:12:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:54.942712 | orchestrator | 2026-04-09 04:12:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:54.942750 | orchestrator | 2026-04-09 04:12:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:12:57.991787 | orchestrator | 2026-04-09 04:12:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:12:57.993805 | orchestrator | 2026-04-09 04:12:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:12:57.993845 | orchestrator | 2026-04-09 04:12:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:01.039153 | orchestrator | 2026-04-09 04:13:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:01.041023 | orchestrator | 2026-04-09 04:13:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:01.041085 | orchestrator | 2026-04-09 04:13:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:04.086460 | orchestrator | 2026-04-09 04:13:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:04.088178 | orchestrator | 2026-04-09 04:13:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:04.088328 | orchestrator | 2026-04-09 04:13:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:07.135325 | orchestrator | 2026-04-09 04:13:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 04:13:07.136187 | orchestrator | 2026-04-09 04:13:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:07.136210 | orchestrator | 2026-04-09 04:13:07 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:10.177303 | orchestrator | 2026-04-09 04:13:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:10.179753 | orchestrator | 2026-04-09 04:13:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:10.179899 | orchestrator | 2026-04-09 04:13:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:13.229971 | orchestrator | 2026-04-09 04:13:13 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:13.231581 | orchestrator | 2026-04-09 04:13:13 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:13.231647 | orchestrator | 2026-04-09 04:13:13 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:16.278579 | orchestrator | 2026-04-09 04:13:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:16.280581 | orchestrator | 2026-04-09 04:13:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:16.280627 | orchestrator | 2026-04-09 04:13:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:19.329768 | orchestrator | 2026-04-09 04:13:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:19.330834 | orchestrator | 2026-04-09 04:13:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:19.331046 | orchestrator | 2026-04-09 04:13:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:22.382740 | orchestrator | 2026-04-09 04:13:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:22.384776 | orchestrator | 2026-04-09 04:13:22 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:22.384823 | orchestrator | 2026-04-09 04:13:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:25.435025 | orchestrator | 2026-04-09 04:13:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:25.436910 | orchestrator | 2026-04-09 04:13:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:25.436972 | orchestrator | 2026-04-09 04:13:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:28.490859 | orchestrator | 2026-04-09 04:13:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:28.492930 | orchestrator | 2026-04-09 04:13:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:28.493075 | orchestrator | 2026-04-09 04:13:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:31.541215 | orchestrator | 2026-04-09 04:13:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:31.543612 | orchestrator | 2026-04-09 04:13:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:31.543692 | orchestrator | 2026-04-09 04:13:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:34.585155 | orchestrator | 2026-04-09 04:13:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:34.585917 | orchestrator | 2026-04-09 04:13:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:34.585955 | orchestrator | 2026-04-09 04:13:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:37.635866 | orchestrator | 2026-04-09 04:13:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:37.637745 | orchestrator | 2026-04-09 04:13:37 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
04:13:37.638429 | orchestrator | 2026-04-09 04:13:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:40.690517 | orchestrator | 2026-04-09 04:13:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:40.692823 | orchestrator | 2026-04-09 04:13:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:40.692885 | orchestrator | 2026-04-09 04:13:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:43.742910 | orchestrator | 2026-04-09 04:13:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:43.743343 | orchestrator | 2026-04-09 04:13:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:43.743537 | orchestrator | 2026-04-09 04:13:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:46.798894 | orchestrator | 2026-04-09 04:13:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:46.800599 | orchestrator | 2026-04-09 04:13:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:46.800697 | orchestrator | 2026-04-09 04:13:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:49.854908 | orchestrator | 2026-04-09 04:13:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:49.856293 | orchestrator | 2026-04-09 04:13:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:49.856334 | orchestrator | 2026-04-09 04:13:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:52.909194 | orchestrator | 2026-04-09 04:13:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:52.911699 | orchestrator | 2026-04-09 04:13:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:52.911768 | orchestrator | 2026-04-09 04:13:52 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 04:13:55.965121 | orchestrator | 2026-04-09 04:13:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:55.967605 | orchestrator | 2026-04-09 04:13:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:55.967649 | orchestrator | 2026-04-09 04:13:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:13:59.027840 | orchestrator | 2026-04-09 04:13:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:13:59.027939 | orchestrator | 2026-04-09 04:13:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:13:59.027979 | orchestrator | 2026-04-09 04:13:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:02.074946 | orchestrator | 2026-04-09 04:14:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:02.077504 | orchestrator | 2026-04-09 04:14:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:02.077594 | orchestrator | 2026-04-09 04:14:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:05.121722 | orchestrator | 2026-04-09 04:14:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:05.123688 | orchestrator | 2026-04-09 04:14:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:05.123733 | orchestrator | 2026-04-09 04:14:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:08.170251 | orchestrator | 2026-04-09 04:14:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:08.171961 | orchestrator | 2026-04-09 04:14:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:08.172116 | orchestrator | 2026-04-09 04:14:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:11.217552 | orchestrator | 2026-04-09 
04:14:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:11.218205 | orchestrator | 2026-04-09 04:14:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:11.218242 | orchestrator | 2026-04-09 04:14:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:14.266738 | orchestrator | 2026-04-09 04:14:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:14.267239 | orchestrator | 2026-04-09 04:14:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:14.267271 | orchestrator | 2026-04-09 04:14:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:17.312850 | orchestrator | 2026-04-09 04:14:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:17.313975 | orchestrator | 2026-04-09 04:14:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:17.314106 | orchestrator | 2026-04-09 04:14:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:20.354444 | orchestrator | 2026-04-09 04:14:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:20.356098 | orchestrator | 2026-04-09 04:14:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:20.356145 | orchestrator | 2026-04-09 04:14:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:23.401134 | orchestrator | 2026-04-09 04:14:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:23.402267 | orchestrator | 2026-04-09 04:14:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:23.402324 | orchestrator | 2026-04-09 04:14:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:26.451972 | orchestrator | 2026-04-09 04:14:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 04:14:26.454512 | orchestrator | 2026-04-09 04:14:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:26.454559 | orchestrator | 2026-04-09 04:14:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:29.497997 | orchestrator | 2026-04-09 04:14:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:29.500596 | orchestrator | 2026-04-09 04:14:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:29.500660 | orchestrator | 2026-04-09 04:14:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:32.541941 | orchestrator | 2026-04-09 04:14:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:32.547035 | orchestrator | 2026-04-09 04:14:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:32.547101 | orchestrator | 2026-04-09 04:14:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:35.590455 | orchestrator | 2026-04-09 04:14:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:35.591995 | orchestrator | 2026-04-09 04:14:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:35.592050 | orchestrator | 2026-04-09 04:14:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:38.631082 | orchestrator | 2026-04-09 04:14:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:38.634442 | orchestrator | 2026-04-09 04:14:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:38.634521 | orchestrator | 2026-04-09 04:14:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:41.681919 | orchestrator | 2026-04-09 04:14:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:41.682913 | orchestrator | 2026-04-09 04:14:41 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:41.682955 | orchestrator | 2026-04-09 04:14:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:44.734279 | orchestrator | 2026-04-09 04:14:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:44.737459 | orchestrator | 2026-04-09 04:14:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:44.737516 | orchestrator | 2026-04-09 04:14:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:47.790834 | orchestrator | 2026-04-09 04:14:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:47.791135 | orchestrator | 2026-04-09 04:14:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:47.791163 | orchestrator | 2026-04-09 04:14:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:50.843292 | orchestrator | 2026-04-09 04:14:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:50.844471 | orchestrator | 2026-04-09 04:14:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:50.844511 | orchestrator | 2026-04-09 04:14:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:53.893921 | orchestrator | 2026-04-09 04:14:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:53.896756 | orchestrator | 2026-04-09 04:14:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:53.896805 | orchestrator | 2026-04-09 04:14:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:56.943346 | orchestrator | 2026-04-09 04:14:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:56.944829 | orchestrator | 2026-04-09 04:14:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
04:14:56.944930 | orchestrator | 2026-04-09 04:14:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:14:59.995761 | orchestrator | 2026-04-09 04:14:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:14:59.998167 | orchestrator | 2026-04-09 04:14:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:14:59.998248 | orchestrator | 2026-04-09 04:14:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:03.049951 | orchestrator | 2026-04-09 04:15:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:03.051622 | orchestrator | 2026-04-09 04:15:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:03.051654 | orchestrator | 2026-04-09 04:15:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:06.106324 | orchestrator | 2026-04-09 04:15:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:06.109535 | orchestrator | 2026-04-09 04:15:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:06.109614 | orchestrator | 2026-04-09 04:15:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:09.156024 | orchestrator | 2026-04-09 04:15:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:09.158518 | orchestrator | 2026-04-09 04:15:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:09.158594 | orchestrator | 2026-04-09 04:15:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:12.210937 | orchestrator | 2026-04-09 04:15:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:12.211616 | orchestrator | 2026-04-09 04:15:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:12.211652 | orchestrator | 2026-04-09 04:15:12 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 04:15:15.253640 | orchestrator | 2026-04-09 04:15:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:15.256146 | orchestrator | 2026-04-09 04:15:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:15.256196 | orchestrator | 2026-04-09 04:15:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:18.314391 | orchestrator | 2026-04-09 04:15:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:18.316584 | orchestrator | 2026-04-09 04:15:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:18.316677 | orchestrator | 2026-04-09 04:15:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:21.360691 | orchestrator | 2026-04-09 04:15:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:21.363310 | orchestrator | 2026-04-09 04:15:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:21.363366 | orchestrator | 2026-04-09 04:15:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:24.413073 | orchestrator | 2026-04-09 04:15:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:24.416196 | orchestrator | 2026-04-09 04:15:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:24.416326 | orchestrator | 2026-04-09 04:15:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:27.464501 | orchestrator | 2026-04-09 04:15:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:27.467522 | orchestrator | 2026-04-09 04:15:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:27.467653 | orchestrator | 2026-04-09 04:15:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:30.513844 | orchestrator | 2026-04-09 
04:15:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:30.518085 | orchestrator | 2026-04-09 04:15:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:30.518160 | orchestrator | 2026-04-09 04:15:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:33.566584 | orchestrator | 2026-04-09 04:15:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:33.569351 | orchestrator | 2026-04-09 04:15:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:33.569386 | orchestrator | 2026-04-09 04:15:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:36.611567 | orchestrator | 2026-04-09 04:15:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:36.612186 | orchestrator | 2026-04-09 04:15:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:36.612271 | orchestrator | 2026-04-09 04:15:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:39.658855 | orchestrator | 2026-04-09 04:15:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:39.659378 | orchestrator | 2026-04-09 04:15:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:39.659407 | orchestrator | 2026-04-09 04:15:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:42.708490 | orchestrator | 2026-04-09 04:15:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:15:42.710750 | orchestrator | 2026-04-09 04:15:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:42.710793 | orchestrator | 2026-04-09 04:15:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:15:45.763460 | orchestrator | 2026-04-09 04:15:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 04:15:45.766314 | orchestrator | 2026-04-09 04:15:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:15:45.766349 | orchestrator | 2026-04-09 04:15:45 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycles repeated every ~3 seconds from 04:15:48 through 04:20:59: tasks 6828e9fb-0b8a-4283-9fa1-3c6673200e24 and 4918e61a-8c4a-42f2-9f33-2d15624c1ede remained in state STARTED, each cycle followed by "Wait 1 second(s) until the next check" ...]
2026-04-09 04:21:02.829596 | orchestrator | 2026-04-09 04:21:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state
STARTED 2026-04-09 04:21:02.830767 | orchestrator | 2026-04-09 04:21:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:02.830803 | orchestrator | 2026-04-09 04:21:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:05.889277 | orchestrator | 2026-04-09 04:21:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:05.891139 | orchestrator | 2026-04-09 04:21:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:05.891186 | orchestrator | 2026-04-09 04:21:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:08.939435 | orchestrator | 2026-04-09 04:21:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:08.942068 | orchestrator | 2026-04-09 04:21:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:08.942112 | orchestrator | 2026-04-09 04:21:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:11.982804 | orchestrator | 2026-04-09 04:21:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:11.985576 | orchestrator | 2026-04-09 04:21:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:11.985630 | orchestrator | 2026-04-09 04:21:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:15.032112 | orchestrator | 2026-04-09 04:21:15 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:15.032622 | orchestrator | 2026-04-09 04:21:15 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:15.032671 | orchestrator | 2026-04-09 04:21:15 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:18.075321 | orchestrator | 2026-04-09 04:21:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:18.076554 | orchestrator | 2026-04-09 04:21:18 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:18.076588 | orchestrator | 2026-04-09 04:21:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:21.118919 | orchestrator | 2026-04-09 04:21:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:21.121127 | orchestrator | 2026-04-09 04:21:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:21.121204 | orchestrator | 2026-04-09 04:21:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:24.163479 | orchestrator | 2026-04-09 04:21:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:24.164552 | orchestrator | 2026-04-09 04:21:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:24.164622 | orchestrator | 2026-04-09 04:21:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:27.208961 | orchestrator | 2026-04-09 04:21:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:27.210144 | orchestrator | 2026-04-09 04:21:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:27.210179 | orchestrator | 2026-04-09 04:21:27 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:30.252545 | orchestrator | 2026-04-09 04:21:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:30.254310 | orchestrator | 2026-04-09 04:21:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:30.254350 | orchestrator | 2026-04-09 04:21:30 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:33.299900 | orchestrator | 2026-04-09 04:21:33 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:33.302080 | orchestrator | 2026-04-09 04:21:33 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
04:21:33.302142 | orchestrator | 2026-04-09 04:21:33 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:36.353197 | orchestrator | 2026-04-09 04:21:36 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:36.355738 | orchestrator | 2026-04-09 04:21:36 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:36.355778 | orchestrator | 2026-04-09 04:21:36 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:39.403966 | orchestrator | 2026-04-09 04:21:39 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:39.405055 | orchestrator | 2026-04-09 04:21:39 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:39.405117 | orchestrator | 2026-04-09 04:21:39 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:42.458118 | orchestrator | 2026-04-09 04:21:42 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:42.460986 | orchestrator | 2026-04-09 04:21:42 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:42.461163 | orchestrator | 2026-04-09 04:21:42 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:45.500128 | orchestrator | 2026-04-09 04:21:45 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:45.501216 | orchestrator | 2026-04-09 04:21:45 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:45.501248 | orchestrator | 2026-04-09 04:21:45 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:48.555307 | orchestrator | 2026-04-09 04:21:48 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:48.557606 | orchestrator | 2026-04-09 04:21:48 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:48.557638 | orchestrator | 2026-04-09 04:21:48 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 04:21:51.597131 | orchestrator | 2026-04-09 04:21:51 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:51.597658 | orchestrator | 2026-04-09 04:21:51 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:51.597714 | orchestrator | 2026-04-09 04:21:51 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:54.646627 | orchestrator | 2026-04-09 04:21:54 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:54.649334 | orchestrator | 2026-04-09 04:21:54 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:54.649386 | orchestrator | 2026-04-09 04:21:54 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:21:57.699493 | orchestrator | 2026-04-09 04:21:57 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:21:57.701538 | orchestrator | 2026-04-09 04:21:57 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:21:57.701607 | orchestrator | 2026-04-09 04:21:57 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:00.752266 | orchestrator | 2026-04-09 04:22:00 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:00.754149 | orchestrator | 2026-04-09 04:22:00 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:00.754212 | orchestrator | 2026-04-09 04:22:00 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:03.802389 | orchestrator | 2026-04-09 04:22:03 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:03.803053 | orchestrator | 2026-04-09 04:22:03 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:03.803082 | orchestrator | 2026-04-09 04:22:03 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:06.856160 | orchestrator | 2026-04-09 
04:22:06 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:06.858073 | orchestrator | 2026-04-09 04:22:06 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:06.858106 | orchestrator | 2026-04-09 04:22:06 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:09.899892 | orchestrator | 2026-04-09 04:22:09 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:09.900118 | orchestrator | 2026-04-09 04:22:09 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:09.900153 | orchestrator | 2026-04-09 04:22:09 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:12.951913 | orchestrator | 2026-04-09 04:22:12 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:12.954203 | orchestrator | 2026-04-09 04:22:12 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:12.954258 | orchestrator | 2026-04-09 04:22:12 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:16.007182 | orchestrator | 2026-04-09 04:22:16 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:16.008740 | orchestrator | 2026-04-09 04:22:16 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:16.008929 | orchestrator | 2026-04-09 04:22:16 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:19.059081 | orchestrator | 2026-04-09 04:22:19 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:19.061868 | orchestrator | 2026-04-09 04:22:19 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:19.061949 | orchestrator | 2026-04-09 04:22:19 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:22.102957 | orchestrator | 2026-04-09 04:22:22 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 04:22:22.104889 | orchestrator | 2026-04-09 04:22:22 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:22.104945 | orchestrator | 2026-04-09 04:22:22 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:25.156667 | orchestrator | 2026-04-09 04:22:25 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:25.158837 | orchestrator | 2026-04-09 04:22:25 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:25.159154 | orchestrator | 2026-04-09 04:22:25 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:28.214713 | orchestrator | 2026-04-09 04:22:28 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:28.217078 | orchestrator | 2026-04-09 04:22:28 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:28.217151 | orchestrator | 2026-04-09 04:22:28 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:31.267096 | orchestrator | 2026-04-09 04:22:31 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:31.268892 | orchestrator | 2026-04-09 04:22:31 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:31.268986 | orchestrator | 2026-04-09 04:22:31 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:34.326660 | orchestrator | 2026-04-09 04:22:34 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:34.329213 | orchestrator | 2026-04-09 04:22:34 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:34.329239 | orchestrator | 2026-04-09 04:22:34 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:37.380068 | orchestrator | 2026-04-09 04:22:37 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:37.382934 | orchestrator | 2026-04-09 04:22:37 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:37.383000 | orchestrator | 2026-04-09 04:22:37 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:40.428341 | orchestrator | 2026-04-09 04:22:40 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:40.429765 | orchestrator | 2026-04-09 04:22:40 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:40.429926 | orchestrator | 2026-04-09 04:22:40 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:43.480150 | orchestrator | 2026-04-09 04:22:43 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:43.480865 | orchestrator | 2026-04-09 04:22:43 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:43.480937 | orchestrator | 2026-04-09 04:22:43 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:46.522474 | orchestrator | 2026-04-09 04:22:46 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:46.523658 | orchestrator | 2026-04-09 04:22:46 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:46.523868 | orchestrator | 2026-04-09 04:22:46 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:49.577114 | orchestrator | 2026-04-09 04:22:49 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:49.579918 | orchestrator | 2026-04-09 04:22:49 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:49.580009 | orchestrator | 2026-04-09 04:22:49 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:52.628009 | orchestrator | 2026-04-09 04:22:52 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:52.630886 | orchestrator | 2026-04-09 04:22:52 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
04:22:52.631043 | orchestrator | 2026-04-09 04:22:52 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:55.681257 | orchestrator | 2026-04-09 04:22:55 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:55.682826 | orchestrator | 2026-04-09 04:22:55 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:55.682891 | orchestrator | 2026-04-09 04:22:55 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:22:58.734393 | orchestrator | 2026-04-09 04:22:58 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:22:58.735550 | orchestrator | 2026-04-09 04:22:58 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:22:58.735576 | orchestrator | 2026-04-09 04:22:58 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:01.789588 | orchestrator | 2026-04-09 04:23:01 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:01.790860 | orchestrator | 2026-04-09 04:23:01 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:01.790928 | orchestrator | 2026-04-09 04:23:01 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:04.844910 | orchestrator | 2026-04-09 04:23:04 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:04.845430 | orchestrator | 2026-04-09 04:23:04 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:04.845444 | orchestrator | 2026-04-09 04:23:04 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:07.891708 | orchestrator | 2026-04-09 04:23:07 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:07.892828 | orchestrator | 2026-04-09 04:23:07 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:07.892887 | orchestrator | 2026-04-09 04:23:07 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 04:23:10.942857 | orchestrator | 2026-04-09 04:23:10 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:10.943507 | orchestrator | 2026-04-09 04:23:10 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:10.943569 | orchestrator | 2026-04-09 04:23:10 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:14.005959 | orchestrator | 2026-04-09 04:23:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:14.008923 | orchestrator | 2026-04-09 04:23:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:14.008997 | orchestrator | 2026-04-09 04:23:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:17.053673 | orchestrator | 2026-04-09 04:23:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:17.055773 | orchestrator | 2026-04-09 04:23:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:17.055815 | orchestrator | 2026-04-09 04:23:17 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:20.096543 | orchestrator | 2026-04-09 04:23:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:20.098846 | orchestrator | 2026-04-09 04:23:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:20.098990 | orchestrator | 2026-04-09 04:23:20 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:23.143157 | orchestrator | 2026-04-09 04:23:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:23.145804 | orchestrator | 2026-04-09 04:23:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:23.146193 | orchestrator | 2026-04-09 04:23:23 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:26.198417 | orchestrator | 2026-04-09 
04:23:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:26.200501 | orchestrator | 2026-04-09 04:23:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:26.200702 | orchestrator | 2026-04-09 04:23:26 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:29.246309 | orchestrator | 2026-04-09 04:23:29 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:29.248005 | orchestrator | 2026-04-09 04:23:29 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:29.248066 | orchestrator | 2026-04-09 04:23:29 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:32.298217 | orchestrator | 2026-04-09 04:23:32 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:32.299832 | orchestrator | 2026-04-09 04:23:32 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:32.299874 | orchestrator | 2026-04-09 04:23:32 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:35.343300 | orchestrator | 2026-04-09 04:23:35 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:35.344554 | orchestrator | 2026-04-09 04:23:35 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:35.344898 | orchestrator | 2026-04-09 04:23:35 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:38.387664 | orchestrator | 2026-04-09 04:23:38 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:38.391025 | orchestrator | 2026-04-09 04:23:38 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:38.391196 | orchestrator | 2026-04-09 04:23:38 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:41.439714 | orchestrator | 2026-04-09 04:23:41 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
STARTED 2026-04-09 04:23:41.441166 | orchestrator | 2026-04-09 04:23:41 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:41.441274 | orchestrator | 2026-04-09 04:23:41 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:44.486449 | orchestrator | 2026-04-09 04:23:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:44.488073 | orchestrator | 2026-04-09 04:23:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:44.488130 | orchestrator | 2026-04-09 04:23:44 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:47.532136 | orchestrator | 2026-04-09 04:23:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:47.533094 | orchestrator | 2026-04-09 04:23:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:47.533561 | orchestrator | 2026-04-09 04:23:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:50.581609 | orchestrator | 2026-04-09 04:23:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:50.581885 | orchestrator | 2026-04-09 04:23:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:50.581909 | orchestrator | 2026-04-09 04:23:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:53.634340 | orchestrator | 2026-04-09 04:23:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:53.636982 | orchestrator | 2026-04-09 04:23:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:53.637133 | orchestrator | 2026-04-09 04:23:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:56.679053 | orchestrator | 2026-04-09 04:23:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:56.681014 | orchestrator | 2026-04-09 04:23:56 | INFO  
| Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:56.681069 | orchestrator | 2026-04-09 04:23:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:23:59.722111 | orchestrator | 2026-04-09 04:23:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:23:59.722974 | orchestrator | 2026-04-09 04:23:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:23:59.723047 | orchestrator | 2026-04-09 04:23:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:24:02.768239 | orchestrator | 2026-04-09 04:24:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:24:02.769670 | orchestrator | 2026-04-09 04:24:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:24:02.769757 | orchestrator | 2026-04-09 04:24:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:24:05.824277 | orchestrator | 2026-04-09 04:24:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:24:05.826557 | orchestrator | 2026-04-09 04:24:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:24:05.826663 | orchestrator | 2026-04-09 04:24:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:24:08.874114 | orchestrator | 2026-04-09 04:24:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:24:08.876017 | orchestrator | 2026-04-09 04:24:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:24:08.876099 | orchestrator | 2026-04-09 04:24:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:24:11.917117 | orchestrator | 2026-04-09 04:24:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:24:11.918897 | orchestrator | 2026-04-09 04:24:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 
04:24:11.918932 | orchestrator | 2026-04-09 04:24:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:24:14.963324 | orchestrator | 2026-04-09 04:24:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:24:14.965433 | orchestrator | 2026-04-09 04:24:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:24:14.965630 | orchestrator | 2026-04-09 04:24:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:24:18.013701 | orchestrator | 2026-04-09 04:24:18 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:24:18.015823 | orchestrator | 2026-04-09 04:24:18 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:24:18.016020 | orchestrator | 2026-04-09 04:24:18 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:24:21.066639 | orchestrator | 2026-04-09 04:24:21 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:24:21.068808 | orchestrator | 2026-04-09 04:24:21 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:24:21.068862 | orchestrator | 2026-04-09 04:24:21 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:24:24.117286 | orchestrator | 2026-04-09 04:24:24 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:24:24.121643 | orchestrator | 2026-04-09 04:24:24 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:24:24.121745 | orchestrator | 2026-04-09 04:24:24 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:24:27.170186 | orchestrator | 2026-04-09 04:24:27 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:24:27.171830 | orchestrator | 2026-04-09 04:24:27 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:24:27.171885 | orchestrator | 2026-04-09 04:24:27 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 04:24:30.212008 | orchestrator | 2026-04-09 04:24:30 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 04:24:30.213101 | orchestrator | 2026-04-09 04:24:30 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 04:24:30.213134 | orchestrator | 2026-04-09 04:24:30 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycle repeated roughly every 3 seconds from 04:24:33 through 04:29:41; both tasks remained in state STARTED throughout ...]
2026-04-09 04:29:44.250884 | orchestrator | 2026-04-09 04:29:44 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 04:29:44.252551 | orchestrator | 2026-04-09 04:29:44 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 04:29:44.252636 | orchestrator | 2026-04-09 04:29:44 | INFO  | Wait 1 second(s) 
until the next check 2026-04-09 04:29:47.295026 | orchestrator | 2026-04-09 04:29:47 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:29:47.297678 | orchestrator | 2026-04-09 04:29:47 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:29:47.297969 | orchestrator | 2026-04-09 04:29:47 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:29:50.350771 | orchestrator | 2026-04-09 04:29:50 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:29:50.351997 | orchestrator | 2026-04-09 04:29:50 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:29:50.352159 | orchestrator | 2026-04-09 04:29:50 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:29:53.397882 | orchestrator | 2026-04-09 04:29:53 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:29:53.398995 | orchestrator | 2026-04-09 04:29:53 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:29:53.399020 | orchestrator | 2026-04-09 04:29:53 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:29:56.443820 | orchestrator | 2026-04-09 04:29:56 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:29:56.445307 | orchestrator | 2026-04-09 04:29:56 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:29:56.445341 | orchestrator | 2026-04-09 04:29:56 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:29:59.496125 | orchestrator | 2026-04-09 04:29:59 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:29:59.499396 | orchestrator | 2026-04-09 04:29:59 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:29:59.499455 | orchestrator | 2026-04-09 04:29:59 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:30:02.546481 | orchestrator | 2026-04-09 
04:30:02 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:30:02.549909 | orchestrator | 2026-04-09 04:30:02 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:30:02.549946 | orchestrator | 2026-04-09 04:30:02 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:30:05.593978 | orchestrator | 2026-04-09 04:30:05 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:30:05.594773 | orchestrator | 2026-04-09 04:30:05 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:30:05.594832 | orchestrator | 2026-04-09 04:30:05 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:30:08.641906 | orchestrator | 2026-04-09 04:30:08 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:30:08.642768 | orchestrator | 2026-04-09 04:30:08 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:30:08.642804 | orchestrator | 2026-04-09 04:30:08 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:30:11.692646 | orchestrator | 2026-04-09 04:30:11 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:30:11.694508 | orchestrator | 2026-04-09 04:30:11 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:30:11.694558 | orchestrator | 2026-04-09 04:30:11 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:30:14.750888 | orchestrator | 2026-04-09 04:30:14 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED 2026-04-09 04:30:14.752251 | orchestrator | 2026-04-09 04:30:14 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED 2026-04-09 04:30:14.752332 | orchestrator | 2026-04-09 04:30:14 | INFO  | Wait 1 second(s) until the next check 2026-04-09 04:30:17.817324 | orchestrator | 2026-04-09 04:30:17 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state 
2026-04-09 04:30:17.818878 | orchestrator | 2026-04-09 04:30:17 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 04:30:17.818936 | orchestrator | 2026-04-09 04:30:17 | INFO  | Wait 1 second(s) until the next check
2026-04-09 04:30:20.872426 | orchestrator | 2026-04-09 04:30:20 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 04:30:20.874476 | orchestrator | 2026-04-09 04:30:20 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 04:30:20.874702 | orchestrator | 2026-04-09 04:30:20 | INFO  | Wait 1 second(s) until the next check
2026-04-09 04:30:23.918796 | orchestrator | 2026-04-09 04:30:23 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 04:30:23.921027 | orchestrator | 2026-04-09 04:30:23 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 04:30:23.921176 | orchestrator | 2026-04-09 04:30:23 | INFO  | Wait 1 second(s) until the next check
2026-04-09 04:30:26.974896 | orchestrator | 2026-04-09 04:30:26 | INFO  | Task 6828e9fb-0b8a-4283-9fa1-3c6673200e24 is in state STARTED
2026-04-09 04:30:26.976915 | orchestrator | 2026-04-09 04:30:26 | INFO  | Task 4918e61a-8c4a-42f2-9f33-2d15624c1ede is in state STARTED
2026-04-09 04:30:26.976992 | orchestrator | 2026-04-09 04:30:26 | INFO  | Wait 1 second(s) until the next check
2026-04-09 04:30:27.440829 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-09 04:30:27.445238 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-09 04:30:28.288289 |
2026-04-09 04:30:28.288522 | PLAY [Post output play]
2026-04-09 04:30:28.308792 |
2026-04-09 04:30:28.308996 | LOOP [stage-output : Register sources]
2026-04-09 04:30:28.380015 |
2026-04-09 04:30:28.380400 | TASK [stage-output : Check sudo]
2026-04-09 04:30:29.284043 | orchestrator | sudo: a password is required
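The run timed out while two tasks never left state STARTED. The generic shape of such a wait loop, as a minimal Python sketch (`wait_for_tasks` and `get_state` are hypothetical names for illustration; the real job queries the task backend for task state):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll until no task is PENDING/STARTED, or raise after `timeout` seconds.

    `get_state` is a hypothetical callable mapping a task ID to a state
    string; the real job presumably asks its task backend for this.
    """
    deadline = time.monotonic() + timeout
    while True:
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"INFO  | Task {tid} is in state {state}")
        # Treat anything other than PENDING/STARTED as terminal.
        if all(s not in ("PENDING", "STARTED") for s in states.values()):
            return states
        if time.monotonic() >= deadline:
            raise TimeoutError(f"tasks still running: {states}")
        print(f"INFO  | Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
```

Note the timestamps above advance by roughly three seconds per "Wait 1 second(s)" line: the sleep interval plus the time spent querying task states. When the loop itself has no effective deadline, Zuul's job-level timeout is what finally stops it, which is the `RESULT_TIMED_OUT` seen here.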
2026-04-09 04:30:29.423482 | orchestrator | ok: Runtime: 0:00:00.014043
2026-04-09 04:30:29.437906 |
2026-04-09 04:30:29.438095 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-09 04:30:29.471550 |
2026-04-09 04:30:29.471819 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-09 04:30:29.551324 | orchestrator | ok
2026-04-09 04:30:29.560313 |
2026-04-09 04:30:29.560470 | LOOP [stage-output : Ensure target folders exist]
2026-04-09 04:30:30.047746 | orchestrator | ok: "docs"
2026-04-09 04:30:30.048182 |
2026-04-09 04:30:30.300740 | orchestrator | ok: "artifacts"
2026-04-09 04:30:30.581265 | orchestrator | ok: "logs"
2026-04-09 04:30:30.602546 |
2026-04-09 04:30:30.602786 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-09 04:30:30.642271 |
2026-04-09 04:30:30.642557 | TASK [stage-output : Make all log files readable]
2026-04-09 04:30:30.959508 | orchestrator | ok
2026-04-09 04:30:30.968864 |
2026-04-09 04:30:30.969023 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-09 04:30:30.994088 | orchestrator | skipping: Conditional result was False
2026-04-09 04:30:31.014071 |
2026-04-09 04:30:31.014282 | TASK [stage-output : Discover log files for compression]
2026-04-09 04:30:31.032012 | orchestrator | skipping: Conditional result was False
2026-04-09 04:30:31.042754 |
2026-04-09 04:30:31.042932 | LOOP [stage-output : Archive everything from logs]
2026-04-09 04:30:31.101100 |
2026-04-09 04:30:31.101385 | PLAY [Post cleanup play]
2026-04-09 04:30:31.114494 |
2026-04-09 04:30:31.114654 | TASK [Set cloud fact (Zuul deployment)]
2026-04-09 04:30:31.175803 | orchestrator | ok
2026-04-09 04:30:31.186557 |
2026-04-09 04:30:31.186724 | TASK [Set cloud fact (local deployment)]
2026-04-09 04:30:31.221539 | orchestrator | skipping: Conditional result was False
2026-04-09 04:30:31.237595 |
2026-04-09 04:30:31.237799 | TASK [Clean the cloud environment]
2026-04-09 04:30:33.227836 | orchestrator | 2026-04-09 04:30:33 - clean up servers
2026-04-09 04:30:34.127551 | orchestrator | 2026-04-09 04:30:34 - testbed-manager
2026-04-09 04:30:34.209482 | orchestrator | 2026-04-09 04:30:34 - testbed-node-4
2026-04-09 04:30:34.300643 | orchestrator | 2026-04-09 04:30:34 - testbed-node-0
2026-04-09 04:30:34.387226 | orchestrator | 2026-04-09 04:30:34 - testbed-node-3
2026-04-09 04:30:34.472840 | orchestrator | 2026-04-09 04:30:34 - testbed-node-5
2026-04-09 04:30:34.577802 | orchestrator | 2026-04-09 04:30:34 - testbed-node-1
2026-04-09 04:30:34.958570 | orchestrator | 2026-04-09 04:30:34 - testbed-node-2
2026-04-09 04:30:35.056239 | orchestrator | 2026-04-09 04:30:35 - clean up keypairs
2026-04-09 04:30:35.076418 | orchestrator | 2026-04-09 04:30:35 - testbed
2026-04-09 04:30:35.106893 | orchestrator | 2026-04-09 04:30:35 - wait for servers to be gone
2026-04-09 04:30:43.903949 | orchestrator | 2026-04-09 04:30:43 - clean up ports
2026-04-09 04:30:44.128073 | orchestrator | 2026-04-09 04:30:44 - 1c90fada-8d60-40a0-b20e-140df2ae132d
2026-04-09 04:30:44.388904 | orchestrator | 2026-04-09 04:30:44 - 24fc093e-7cbb-4e5c-9e56-50f93a620f5c
2026-04-09 04:30:44.735914 | orchestrator | 2026-04-09 04:30:44 - 26d25791-9094-41aa-9a00-4e9b02ae46fc
2026-04-09 04:30:45.049766 | orchestrator | 2026-04-09 04:30:45 - 3efebdbb-2774-416a-aa55-07572cd33f59
2026-04-09 04:30:45.304397 | orchestrator | 2026-04-09 04:30:45 - 55c83c3f-bb6e-4c1f-b845-5716a9a75a6f
2026-04-09 04:30:45.535874 | orchestrator | 2026-04-09 04:30:45 - 6faa6520-7ba1-420f-9a8d-b2dd5bed4d65
2026-04-09 04:30:45.942595 | orchestrator | 2026-04-09 04:30:45 - 8c1f28e0-6df0-45d7-93a1-d5e899179ec6
2026-04-09 04:30:46.178677 | orchestrator | 2026-04-09 04:30:46 - clean up volumes
2026-04-09 04:30:46.307528 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-2-node-base
2026-04-09 04:30:46.350611 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-1-node-base
2026-04-09 04:30:46.392510 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-4-node-base
2026-04-09 04:30:46.436038 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-3-node-base
2026-04-09 04:30:46.480975 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-0-node-base
2026-04-09 04:30:46.522999 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-5-node-base
2026-04-09 04:30:46.564585 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-manager-base
2026-04-09 04:30:46.611031 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-5-node-5
2026-04-09 04:30:46.658318 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-6-node-3
2026-04-09 04:30:46.704427 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-3-node-3
2026-04-09 04:30:46.748677 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-1-node-4
2026-04-09 04:30:46.797124 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-4-node-4
2026-04-09 04:30:46.840725 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-7-node-4
2026-04-09 04:30:46.885195 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-8-node-5
2026-04-09 04:30:46.931187 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-0-node-3
2026-04-09 04:30:46.977842 | orchestrator | 2026-04-09 04:30:46 - testbed-volume-2-node-5
2026-04-09 04:30:47.027224 | orchestrator | 2026-04-09 04:30:47 - disconnect routers
2026-04-09 04:30:47.169977 | orchestrator | 2026-04-09 04:30:47 - testbed
2026-04-09 04:30:48.260503 | orchestrator | 2026-04-09 04:30:48 - clean up subnets
2026-04-09 04:30:48.328286 | orchestrator | 2026-04-09 04:30:48 - subnet-testbed-management
2026-04-09 04:30:48.514676 | orchestrator | 2026-04-09 04:30:48 - clean up networks
2026-04-09 04:30:48.707276 | orchestrator | 2026-04-09 04:30:48 - net-testbed-management
2026-04-09 04:30:49.042906 | orchestrator | 2026-04-09 04:30:49 - clean up security groups
2026-04-09 04:30:49.082176 | orchestrator | 2026-04-09 04:30:49 - testbed-management
2026-04-09 04:30:49.204256 | orchestrator | 2026-04-09 04:30:49 - testbed-node
2026-04-09 04:30:49.321749 | orchestrator | 2026-04-09 04:30:49 - clean up floating ips
2026-04-09 04:30:49.362421 | orchestrator | 2026-04-09 04:30:49 - 81.163.193.59
2026-04-09 04:30:49.903061 | orchestrator | 2026-04-09 04:30:49 - clean up routers
2026-04-09 04:30:50.015393 | orchestrator | 2026-04-09 04:30:50 - testbed
2026-04-09 04:30:51.792366 | orchestrator | ok: Runtime: 0:00:19.799757
2026-04-09 04:30:51.799094 |
2026-04-09 04:30:51.799263 | PLAY RECAP
2026-04-09 04:30:51.799395 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-09 04:30:51.799458 |
2026-04-09 04:30:51.958885 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-09 04:30:51.960099 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-09 04:30:52.724439 |
2026-04-09 04:30:52.724639 | PLAY [Cleanup play]
2026-04-09 04:30:52.742741 |
2026-04-09 04:30:52.742952 | TASK [Set cloud fact (Zuul deployment)]
2026-04-09 04:30:52.805051 | orchestrator | ok
2026-04-09 04:30:52.816228 |
2026-04-09 04:30:52.816444 | TASK [Set cloud fact (local deployment)]
2026-04-09 04:30:52.851990 | orchestrator | skipping: Conditional result was False
2026-04-09 04:30:52.867193 |
2026-04-09 04:30:52.867355 | TASK [Clean the cloud environment]
2026-04-09 04:30:54.083176 | orchestrator | 2026-04-09 04:30:54 - clean up servers
2026-04-09 04:30:54.706879 | orchestrator | 2026-04-09 04:30:54 - clean up keypairs
2026-04-09 04:30:54.729739 | orchestrator | 2026-04-09 04:30:54 - wait for servers to be gone
2026-04-09 04:30:54.775504 | orchestrator | 2026-04-09 04:30:54 - clean up ports
2026-04-09 04:30:54.856017 | orchestrator | 2026-04-09 04:30:54 - clean up volumes
2026-04-09 04:30:54.928598 | orchestrator | 2026-04-09 04:30:54 - disconnect routers
2026-04-09 04:30:54.961410 | orchestrator | 2026-04-09 04:30:54 - clean up subnets
2026-04-09 04:30:54.992480 | orchestrator | 2026-04-09 04:30:54 - clean up networks
2026-04-09 04:30:55.151551 | orchestrator | 2026-04-09 04:30:55 - clean up security groups
2026-04-09 04:30:55.189908 | orchestrator | 2026-04-09 04:30:55 - clean up floating ips
2026-04-09 04:30:55.225165 | orchestrator | 2026-04-09 04:30:55 - clean up routers
2026-04-09 04:30:55.408968 | orchestrator | ok: Runtime: 0:00:01.553389
2026-04-09 04:30:55.411808 |
2026-04-09 04:30:55.412019 | PLAY RECAP
2026-04-09 04:30:55.412101 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-09 04:30:55.412131 |
2026-04-09 04:30:55.584382 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-09 04:30:55.585572 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-09 04:30:56.413554 |
2026-04-09 04:30:56.413755 | PLAY [Base post-fetch]
2026-04-09 04:30:56.432331 |
2026-04-09 04:30:56.432488 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-09 04:30:56.498217 | orchestrator | skipping: Conditional result was False
2026-04-09 04:30:56.507479 |
2026-04-09 04:30:56.507642 | TASK [fetch-output : Set log path for single node]
2026-04-09 04:30:56.555251 | orchestrator | ok
2026-04-09 04:30:56.561822 |
2026-04-09 04:30:56.561967 | LOOP [fetch-output : Ensure local output dirs]
2026-04-09 04:30:57.105354 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/37b66db7046a43208c813cef6fe11a97/work/logs"
2026-04-09 04:30:57.440604 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/37b66db7046a43208c813cef6fe11a97/work/artifacts"
2026-04-09 04:30:57.751988 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/37b66db7046a43208c813cef6fe11a97/work/docs"
2026-04-09 04:30:57.772937 |
2026-04-09 04:30:57.773090 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-09 04:30:58.788240 | orchestrator | changed: .d..t...... ./
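Both cleanup passes above tear resources down in the same dependency order: servers and keypairs first, then ports, volumes, router interfaces, subnets, networks, security groups, floating IPs, and finally routers, so that nothing is deleted while something still references it. A sketch of that ordering with openstacksdk — an assumption for illustration, not the testbed's actual cleanup script:

```python
# Teardown order as seen in the log: dependents before the resources they
# attach to (servers before ports, interfaces detached before subnets,
# floating IPs released before routers go).
CLEANUP_ORDER = [
    "servers", "keypairs", "wait for servers to be gone", "ports",
    "volumes", "disconnect routers", "subnets", "networks",
    "security groups", "floating ips", "routers",
]

def clean_testbed(cloud="testbed", prefix="testbed"):
    """Hypothetical openstacksdk teardown; only `prefix`-named resources."""
    import openstack  # lazy import: CLEANUP_ORDER stays importable without the SDK

    conn = openstack.connect(cloud=cloud)
    servers = [s for s in conn.compute.servers() if s.name.startswith(prefix)]
    for s in servers:
        conn.compute.delete_server(s)
    for kp in conn.compute.keypairs():
        if kp.name.startswith(prefix):
            conn.compute.delete_keypair(kp)
    for s in servers:
        conn.compute.wait_for_delete(s)  # ports/volumes free up only now
    for port in conn.network.ports():    # project-scoped: all ports are ours
        conn.network.delete_port(port)
    for vol in conn.block_storage.volumes():
        if vol.name.startswith(prefix):
            conn.block_storage.delete_volume(vol)
    for router in conn.network.routers():
        if router.name.startswith(prefix):
            for subnet in conn.network.subnets():
                if subnet.name.startswith(f"subnet-{prefix}"):
                    conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
    for subnet in conn.network.subnets():
        if subnet.name.startswith(f"subnet-{prefix}"):
            conn.network.delete_subnet(subnet)
    for net in conn.network.networks():
        if net.name.startswith(f"net-{prefix}"):
            conn.network.delete_network(net)
    for sg in conn.network.security_groups():
        if sg.name.startswith(prefix):
            conn.network.delete_security_group(sg)
    for fip in conn.network.ips():       # floating IPs
        conn.network.delete_ip(fip)
    for router in conn.network.routers():
        if router.name.startswith(prefix):
            conn.network.delete_router(router)
```

The second pass (cleanup.yml) runs the same steps against an already-empty project, which is why each "clean up …" line there lists no resources and the whole task finishes in about 1.5 seconds instead of 20.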
2026-04-09 04:30:58.788538 | orchestrator | changed: All items complete
2026-04-09 04:30:58.788584 |
2026-04-09 04:30:59.546316 | orchestrator | changed: .d..t...... ./
2026-04-09 04:31:00.292776 | orchestrator | changed: .d..t...... ./
2026-04-09 04:31:00.331010 |
2026-04-09 04:31:00.331280 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-09 04:31:00.372742 | orchestrator | skipping: Conditional result was False
2026-04-09 04:31:00.376335 | orchestrator | skipping: Conditional result was False
2026-04-09 04:31:00.396885 |
2026-04-09 04:31:00.397076 | PLAY RECAP
2026-04-09 04:31:00.397169 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-09 04:31:00.397217 |
2026-04-09 04:31:00.571207 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-09 04:31:00.573753 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-09 04:31:01.338030 |
2026-04-09 04:31:01.338204 | PLAY [Base post]
2026-04-09 04:31:01.352881 |
2026-04-09 04:31:01.353022 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-09 04:31:02.348006 | orchestrator | changed
2026-04-09 04:31:02.357432 |
2026-04-09 04:31:02.357563 | PLAY RECAP
2026-04-09 04:31:02.357632 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-09 04:31:02.357721 |
2026-04-09 04:31:02.517569 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-09 04:31:02.520190 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-09 04:31:03.391815 |
2026-04-09 04:31:03.392017 | PLAY [Base post-logs]
2026-04-09 04:31:03.404033 |
2026-04-09 04:31:03.404232 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-09 04:31:03.877297 | localhost | changed
2026-04-09 04:31:03.887481 |
2026-04-09 04:31:03.887628 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-09 04:31:03.924328 | localhost | ok
2026-04-09 04:31:03.929567 |
2026-04-09 04:31:03.929731 | TASK [Set zuul-log-path fact]
2026-04-09 04:31:03.957607 | localhost | ok
2026-04-09 04:31:03.973122 |
2026-04-09 04:31:03.973289 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-09 04:31:04.022309 | localhost | ok
2026-04-09 04:31:04.029802 |
2026-04-09 04:31:04.030006 | TASK [upload-logs : Create log directories]
2026-04-09 04:31:04.596350 | localhost | changed
2026-04-09 04:31:04.601670 |
2026-04-09 04:31:04.601864 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-09 04:31:05.099947 | localhost -> localhost | ok: Runtime: 0:00:00.007560
2026-04-09 04:31:05.109519 |
2026-04-09 04:31:05.109788 | TASK [upload-logs : Upload logs to log server]
2026-04-09 04:31:05.716842 | localhost | Output suppressed because no_log was given
2026-04-09 04:31:05.718764 |
2026-04-09 04:31:05.718952 | LOOP [upload-logs : Compress console log and json output]
2026-04-09 04:31:05.783305 | localhost | skipping: Conditional result was False
2026-04-09 04:31:05.788167 | localhost | skipping: Conditional result was False
2026-04-09 04:31:05.801000 |
2026-04-09 04:31:05.801279 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-09 04:31:05.851801 | localhost | skipping: Conditional result was False
2026-04-09 04:31:05.852254 |
2026-04-09 04:31:05.856783 | localhost | skipping: Conditional result was False
2026-04-09 04:31:05.863089 |
2026-04-09 04:31:05.863249 | LOOP [upload-logs : Upload console log and json output]
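Tasks such as `stage-output : Make all log files readable` and `upload-logs : Ensure logs are readable before uploading` exist so the static log server can actually serve every collected file. A rough Python sketch of what such a step amounts to (the roles themselves likely shell out to `chmod` or use Ansible's file module; `make_logs_readable` is a hypothetical name):

```python
import os
import stat

def make_logs_readable(root):
    """Add world-read bits to every file, and world-read+execute bits to
    every directory, under `root`, so an unprivileged web server can
    traverse and serve the whole log tree."""
    for dirpath, dirnames, filenames in os.walk(root):
        for d in dirnames:
            p = os.path.join(dirpath, d)
            os.chmod(p, os.stat(p).st_mode | stat.S_IROTH | stat.S_IXOTH)
        for f in filenames:
            p = os.path.join(dirpath, f)
            os.chmod(p, os.stat(p).st_mode | stat.S_IROTH)
    # os.walk does not visit root itself as a child, so fix it explicitly.
    os.chmod(root, os.stat(root).st_mode | stat.S_IROTH | stat.S_IXOTH)
```

The actual upload output above is suppressed (`no_log`) because it would otherwise leak the log server credentials into the very log being uploaded.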